1 Introduction

The Kardar–Parisi–Zhang equation [1] describes the stochastic dynamics of the height field, h(x, t), of a growing interface in the continuum, as a function of time, or equivalently, of the free energy of a continuum directed polymer in a random potential as a function of its length. In one space dimension, \(x \in {\mathbb {R}}\), it is at the center of a vast universality class, the KPZ class, which contains numerous solvable discrete models with the same large scale behavior. Examples of such solvable models include random permutations and the associated PNG growth model, interacting particle systems (TASEP, ASEP, and variants), the stochastic six-vertex model and other stochastic vertex models, and several discrete models of directed polymers (DP). There have been many papers on the subject and we refer the reader to the reviews and lecture notes [2,3,4,5,6,7,8,9,10,11,12,13,14]. Exact results have also been obtained for the KPZ equation itself, and its equivalent system, the continuous directed polymer [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. These results have been obtained on the full line, \(x \in {\mathbb {R}}\), and have allowed one to prove the existing conjectures for the scaling exponents of the height fluctuations, \(\delta h \sim t^{1/3} \sim x^{1/2}\), and to predict and classify the universal probability distributions which arise for various initial conditions. Most notably, the droplet and flat initial conditions were shown to lead, at large time, to the Tracy–Widom distributions [33, 34] for the height at one point (centered and scaled by \(t^{1/3}\)), associated respectively to the Gaussian unitary and orthogonal ensembles, GUE and GOE, of random matrix theory. Some of these predictions have been successfully tested in experiments [35,36,37,38,39,40].

It is also interesting for applications [41] to study models in the KPZ class restricted to the half-line \(x \in {\mathbb {R}}_+\), for which fewer results are available at present. For some specific couplings to the boundary (at \(x=0\)) the solvability properties can sometimes be preserved. One can usually define a parameter characterizing this coupling, which we will call A, and which is defined below in Eq. (2.3) for the KPZ equation. For the KPZ equation, \(A=0\) and \(A=+\infty \) correspond respectively to Neumann and to Dirichlet boundary conditions. The half-line KPZ equation is equivalent to a continuum directed polymer on a half-space with a wall at \(x=0\), which is repulsive for \(A>0\) and attractive for \(A<0\). A remarkable feature of the half-space problem is the existence of a critical value \(A_c\) of the parameter A at which a phase transition occurs. In the polymer language it corresponds to a binding of the polymer to the wall when the attraction is strong enough, \(A<A_c\). The existence of this transition was predicted by Kardar in [42] for the continuum directed polymer, using the replica Bethe ansatz. No prediction for the height distribution was obtained however.

In mathematics, in a pioneering paper in 1999, Baik and Rains [2] proved the existence of a similar transition in the context of the longest increasing sub-sequences (LIS) of symmetrized random permutations. There it was shown that, in the unbound phase, for the droplet initial condition, and near \(x=0\), the (scaled) height fluctuations obey the Tracy–Widom distribution associated to the Gaussian symplectic ensemble (GSE). In the bound phase \(A<A_c\), the fluctuations are simply Gaussian. Exactly at the transition, \(A=A_c\), the fluctuations also obey a Tracy–Widom distribution, but the one associated to the Gaussian orthogonal ensemble (GOE). These results were extended to other models, e.g. a GSE-GUE crossover was shown as the endpoint position is varied towards the bulk in the PNG model [43]. For the TASEP in a half-space, equivalent to last passage percolation in a half-quadrant [3], similar results were obtained in [44, 45] using Pfaffian-Schur processes [46], in particular concerning the crossover as the parameter controlling the boundary is varied simultaneously with the distance to the boundary. All the half-space models discussed above can be studied via the framework of Pfaffian point processes and random matrix theoretic techniques (these models are free-fermionic). However, these models do not converge to the KPZ equation. The models converging to the KPZ equation, such as the asymmetric simple exclusion process (ASEP) or directed polymers, are not directly related to Pfaffian point processes (they are non-free-fermionic). Among those, ASEP was studied in [47,48,49] and the half-space log-gamma polymer was studied in [50,51,52].

For the KPZ equation in the half-space, a solution for the one-point height distribution near the wall, valid at any time, was obtained for \(A=+\infty \) and the droplet initial condition using the replica Bethe ansatz in [53] (see also [54]). Another solution (also non-rigorous) was obtained for \(A=0\) in [55], with related but different methods using nested contour integral representations. In both cases the large time limit is found to be GSE Tracy–Widom, consistent with the general picture obtained by Baik and Rains as discussed above. Next, taking the limit from a half-space ASEP, a rigorous solution for the critical case \(A=- \frac{1}{2}\) was obtained [49], leading to GOE Tracy–Widom fluctuations. More recently, in [56], a solution valid for any time was found using the replica Bethe ansatz for any \(A> -1/2\), which leads to the GOE Tracy–Widom distribution at the critical point \(A = -1/2\). In [57] a solution from the RBA, taking into account bound states, was obtained, in agreement with the results of [56].

A remarkable property of the KPZ equation on the full line is that the stationary measure is Brownian motion, in the sense that if the initial condition h(x, 0) is a two-sided Brownian motion (with the appropriate amplitude) the PDF of the height difference between two space points is time independent. This leaves a uniform shift h(0, t), whose fluctuations, scaled by \(t^{1/3}\), were shown to follow the so-called Baik–Rains distribution, which is universal over the KPZ class. For the KPZ equation, the solution for all time with Brownian initial condition was found using the RBA in [30] and proved rigorously in [31]. Early investigations of stationary models in the KPZ universality class started with [58] in the context of the polynuclear growth model, which introduced the Baik–Rains distribution as the limiting distribution of height fluctuations. For a very similar model (TASEP), the spatial correlations were investigated in [59]. Outside the class of free fermionic models, besides [30, 31] that we have already discussed, let us also mention [60] which proved the one-point convergence of the ASEP height function towards the Baik–Rains distribution.

The aim of the present paper is to address the same problem, but for the half-line. We study the KPZ equation on the half-line with a boundary parameter A and an initial condition chosen as a unit one-sided Brownian motion with a drift, which we denote for later convenience as \(- (B + \frac{1}{2})\). The problem is thus determined by two parameters, A and B, and we will study the phase diagram in the (A, B) plane. As we show, on the line \(A+B+1=0\) the initial condition is stationary, in the same sense as above, i.e. the PDF of the height difference between two space points remains at all times the one of the same Brownian motion. We show furthermore that the distribution of the height shift, h(0, t), is invariant under the exchange of the parameters A and B. We will use two methods to obtain the exact generating function which characterizes the distribution of h(0, t) at any time t. The first one is the replica Bethe ansatz and is a generalization of the calculation presented in [56]. The second one starts from a known moment formula for the so-called log-gamma polymer [61], and takes the continuum limit to the KPZ equation. The obtained formula is valid for any point in the quadrant \(A,B > -1/2\). We then study its large time limit, leading to the phase diagram of Fig. 1. In the (A, B) plane, the point \((A,B)= (-1/2,-1/2)\) plays a special role, as the system is critical with respect to the boundary and at the same time stationary, and will be called critical stationary. At this point, we show that in the large time limit the fluctuations of the interface are governed by a universal distribution, which we express in terms of a simple Fredholm determinant, or equivalently in terms of the Painlevé II transcendent. In a sense, it is the analog for the half-line problem of the Baik–Rains distribution for the full line. Inside the quadrant \(A,B > -1/2\) the distribution is the GSE Tracy–Widom distribution, and on scales \(A+\frac{1}{2}, B + \frac{1}{2} \sim t^{-1/3}\) there is a universal two-parameter crossover distribution that we obtain in the quadrant. In the so-called bound phase, \(A<-1/2\) or \(B<-1/2\), away from the critical stationary point (including along the line \(A+B+1=0\)), the fluctuations are expected to be Gaussian, except when \(A=B<-1/2\) where we expect the fluctuations to be distributed as the maximum eigenvalue of a \(2\times 2\) GUE matrix. The crossover to this behavior is however beyond the scope of this paper. Finally, note that our results are consistent in the limit \(B \rightarrow +\infty \) with all previous results for the droplet initial condition, in particular with the ones in [56] for \(A\geqslant -1/2\). The level of mathematical rigor of all these results is discussed in Sect. 2.4.

It is important to mention that very recently Betea et al. [62] studied stationary half-space KPZ growth for a discrete model, last-passage percolation with exponential weights (i.e. a zero-temperature polymer). They obtained a formula for the asymptotic height distribution, depending on several parameters controlling the distance to the boundary and the position along the line \(A+B+1=0\) near the critical stationary point. We expect, from the universality within the KPZ class, that our present result and theirs should match. The Pfaffian formula of [62, Theorem 2.7] and our formulae (2.25), (2.27) and (2.28) look different. Although we also have a Pfaffian representation, Eq. (2.20), the associated kernels are different and we do not have a general method to show the equivalence of the Fredholm Pfaffians. A similar issue is discussed in [56, Sect. 4.2, Eq. (74)]. Note that our formula allows for a very easy numerical evaluation of the CDF, given below in Fig. 3, and of the first moments, and allows us to determine tail estimates.

It would be interesting to study the distribution of h(x, t) when x is at a distance of order \(t^{2/3}\) from 0, and A, B are scaled close to \(-1/2\). This would correspond to varying the parameter \(\eta \) in [62]. However, while we can obtain some integral formula for the moments of Z(x, t) in the case \(x>0\), see (4.19) below, we do not expect that it can be rewritten as a Pfaffian formula, and the asymptotic analysis would require developing other methods. We leave this for future consideration.

1.1 Outline

First, in Sect. 2 we define the models and give a summary of the main results and formulae obtained in this paper. In the two following sections we compute the moments of the polymer partition sum, i.e. the exponential moments of the KPZ field, by two methods. In Sect. 3 we present the derivation using the Bethe ansatz. In Sect. 4 we obtain the moments starting from the log-gamma polymer, and check that the two moment formulae coincide. In Sect. 5 we obtain a Pfaffian formula for the Laplace transform generating function by summing up the moments. This leads to our first result, valid for all times and any \(A,B > -1/2\), for the generating function as a Fredholm Pfaffian in terms of a matrix kernel. The large time limit of this formula, and of the matrix kernel, is studied in Sect. 6. In Sect. 7 we extend the method described in [54] to obtain a formula for the Laplace transform generating function in terms of a scalar kernel, valid for all times and \(A,B > -1/2\). In Sect. 7.2 we perform the large time limit on this scalar kernel, which leads to a two-parameter family of interpolating kernels near the point \((A,B)=(-1/2,-1/2)\). From it we obtain various limits, including our formula for the critical stationary distribution.

2 Model and Main Results

2.1 Model

In this paper we study the KPZ equation, which reads, in dimensionless units

$$\begin{aligned} \partial _t h(x,t) = \partial _x^2 h(x,t) + (\partial _x h(x,t))^2 + \sqrt{2} \, \xi (x,t) \end{aligned}$$
(2.1)

where \(\xi (x,t)\) is the standard space-time white noise, with \({\mathbb {E}}[ \xi (x,t) \xi (x',t') ]=\delta (x-x') \delta (t-t')\). One introduces, via the Cole–Hopf mapping, the directed polymer partition sum \(Z(x,t) = e^{h(x,t)}\), where h(x, t) is the solution of the KPZ equation (2.1). It satisfies the multiplicative noise stochastic heat equation (SHE)

$$\begin{aligned} \partial _t Z(x,t) = \partial _x^2 Z(x,t) + \sqrt{2} \, \xi (x,t) \, Z(x,t) \end{aligned}$$
(2.2)

understood here with the Ito prescription. Equation (2.2) means that Z(x, t) can be seen as a partition sum over continuum directed paths in the random potential \(-\sqrt{2} \, \xi (x,t)\), with the endpoint at time t fixed at position x.

Definition 2.1

We consider the SHE on the half-line \(x \geqslant 0\) with boundary parameter A, and let \((x,t)\mapsto Z(x,t)\) be the solution to (2.2) (it can be shown that the solution is unique, see [63, 64] and references therein) with the boundary condition

$$\begin{aligned} \partial _x Z(x,t)_{\mid x=0}=A Z(0,t). \end{aligned}$$
(2.3)

and with the Brownian initial data, in the presence of a drift \(-1/2-B\),

$$\begin{aligned} Z(x,0)=e^{{\mathcal {B}}(x)-(1/2+B) x} \end{aligned}$$
(2.4)

We have shifted by 1/2 the drift parameter to make more explicit a remarkable symmetry between parameters A and B. Indeed, we show (see Claim 4.9) that for any \(A,B\in {\mathbb {R}}\), we have the equality in distribution

$$\begin{aligned} Z_A^{B}(x=0,t)= Z_B^{A}(x=0,t), \text { for any }t>0. \end{aligned}$$

When B goes to \(+\infty \), we recover a result recently proved in [65, Theorem 1.1].

When \(A+B+1=0\), the model defined in Definition 2.1 is stationary in the sense that for any fixed time t, the spatial process \(\lbrace Z(x,t)/Z(0,t)\rbrace _{x\geqslant 0}\) has the same distribution as \(\lbrace Z(x,0)\rbrace _{x\geqslant 0}\), that is the exponential of a standard Brownian motion with drift \(-1/2-B\). Equivalently, the distribution of the slope field \(\partial _x h(x,t)\) is time-stationary. Let us explain where this condition \(A+B+1=0\) comes from. In Sect. 4, we consider a discretization of the KPZ equation, the log-gamma directed polymer. We identify in Sect. 4.2 initial and boundary conditions for the log-gamma polymer which make the increments of the partition function stationary in time, using a result from [61] which deals with the full-space case. The log-gamma polymer partition function converges weakly to the stochastic heat equation from Definition 2.1 at high temperature, see details in Sect. 4.3 (note that we provide only a sketch of proof of this result, based on a combination of results from [66] and [65]). Hence we may pass to the limit, and taking into account the precise scalings, we obtain that the spatial process \(\lbrace Z(x,t)/Z(0,t)\rbrace _{x\geqslant 0}\) is stationary when \(A+B+1=0\).

There may exist other initial conditions for the half-space KPZ equation (or equivalently the stochastic heat equation) such that the slope field \(\partial _x h(x,t)\) is time-stationary. Indeed, the KPZ equation arises as a scaling limit of the height function of particle systems such as ASEP for which other stationary distributions exist (see Refs. [67,68,69,70,71]).

Before presenting our main results, let us clarify the meaning of the boundary condition (2.3). As a process in x, Z(x, t) has the same regularity as a Brownian motion, hence \(\partial _x Z(x,t)\) cannot be given a pointwise value. To make sense of (2.3), we say [63] that Z(x, t) is a solution of (2.2) if it satisfies

$$\begin{aligned} Z(x,t) = \int _{0}^{\infty } p^A_t(x,y)Z(y,0){{\mathrm {d}}} y + \int _0^{\infty }{{\mathrm {d}}} y \int _{0}^t p^A_{t-s}(x,y)Z(y,s)\xi (y,s) {{\mathrm {d}}} s, \end{aligned}$$
(2.5)

where the last integral is an Ito integral and \(p^A_t(x,y)\) is the heat kernel on the positive half-line (i.e. it solves the equation \(\partial _t u = \Delta u\) with initial data \(\delta _y\)) that satisfies the boundary condition

$$\begin{aligned} \partial _x p^A_t(x,y) \big \vert _{x=0} = A\;p^A_t(0,y), \quad \quad t>0, y>0. \end{aligned}$$
(2.6)

The main consequence that we will use below is that

$$\begin{aligned} \partial _{x_i} {\mathbb {E}}\left[ Z(x_1,t) \dots Z(x_n,t) \right] \Big \vert _{x_i=0} = A \; {\mathbb {E}}\left[ Z(x_1,t) \dots Z(x_n,t) \right] \Big \vert _{x_i=0}, \quad 1\leqslant i\leqslant n, \end{aligned}$$
(2.7)

which can be obtained by replacing \(Z(x_i,t)\) using (2.5) inside the expectation and differentiating with respect to \(x_i\).
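Although we will not need an explicit formula for \(p^A_t(x,y)\), it is straightforward to construct it numerically. The following is a minimal finite-difference sketch in Python (an illustration only; the truncation length, grid spacing and the ghost-point discretization of (2.6) are ad hoc choices, not part of the derivation): it propagates an approximate \(\delta _y\) initial condition and checks that the one-sided derivative at \(x=0\) is approximately \(A\, p^A_t(0,y)\).

```python
import numpy as np

def robin_heat_kernel(A, y, t, L=10.0, dx=0.02):
    """Approximate p^A_t(., y): solve u_t = u_xx on [0, L] with the Robin
    condition u_x(0) = A u(0), an absorbing far wall at x = L (irrelevant
    for L large), and initial data approximating delta_y."""
    x = np.arange(0.0, L + dx, dx)
    u = np.zeros_like(x)
    u[int(round(y / dx))] = 1.0 / dx              # discrete delta function at y
    dt = 0.4 * dx**2                              # explicit Euler, stable since dt < dx^2 / 2
    for _ in range(int(t / dt)):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        # ghost point u_{-1} = u_1 - 2 dx A u_0 enforces the Robin condition (2.6)
        lap[0] = (2 * u[1] - 2 * (1 + A * dx) * u[0]) / dx**2
        lap[-1] = 0.0
        u = u + dt * lap
        u[-1] = 0.0
    return x, u

A = 1.0
x, p = robin_heat_kernel(A, y=1.0, t=0.3)
dx = x[1] - x[0]
# boundary condition (2.6): one-sided derivative at x = 0 versus A * p(0, y)
print((-3 * p[0] + 4 * p[1] - p[2]) / (2 * dx), A * p[0])
```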

In terms of directed polymers, Z(x, t) can be represented as a partition sum over directed paths

$$\begin{aligned} Z(x,t)={\mathbb {E}}_{{\mathcal {B}}}\left[ \int _0^{+\infty }{\mathrm {d}}y \, e^{{\mathcal {B}}(y)-(B+1/2)y}\int _{x(0)=y}^{x(t)=x}{\mathcal {D}}x(\tau )e^{-\int _0^t {\mathrm {d}}\tau [\frac{1}{4}(\frac{{\mathrm {d}}x}{{\mathrm {d}}\tau })^2-\sqrt{2}\eta (x(\tau ),\tau )+2A \delta (x(\tau ))]}\right] \, , \end{aligned}$$
(2.8)

where \(\eta \) is a space-time white noise and \({\mathcal {D}}\) denotes the “measure on paths”; more precisely the path integral is defined as an expectation value over reflected Brownian bridges \(x(\tau )\in {\mathbb {R}}_+\) (reflected at \(x=0\)) for a given realization of the Brownian initial condition \({\mathcal {B}}(y)\), followed by an expectation over the Brownian \({\mathcal {B}}\). The extra \(\delta \) interaction ensures the proper boundary condition at \(x=0\) for Z(xt), see [55, Sect. 3.2].

2.2 Presentation of the Main Results

Our main results concern the height at \(x=0\), h(0, t). In general its large time behavior is expected to be

$$\begin{aligned} h(0,t) \simeq v^{A,B}_{\infty } t + t^{\beta } \chi \end{aligned}$$
(2.9)

where \(\chi \) is an \({\mathcal {O}}(1)\) random variable, and \(\beta \) the growth fluctuation exponent. In the quadrant \(A,B \geqslant -1/2\), to which our exact results are restricted, one has \(\beta =1/3\) and \(v^{A,B}_{\infty }= - \frac{1}{12}\). Hence to present these results, we define everywhere the shifted variable

$$\begin{aligned} H(t) = h(0,t) + \frac{t}{12} \end{aligned}$$
(2.10)

Note however that in the so-called bound phase, which will not be studied in great detail here, we expect a different value of \(v^{A,B}_{\infty }\) with \(\beta =1/2\) and different distributions for \(\chi \) (see Sect. 4.6). In the limit \(B \rightarrow +\infty \), i.e. for the droplet initial condition, it was found [57] that \(v^{A,+\infty }_{\infty } = - \frac{1}{12} + (A+\frac{1}{2})^2\) for \(A \leqslant -1/2\). In the general AB case, we expect that \( v^{A,B}_{\infty } = - \frac{1}{12} + \Big (\min \big \lbrace A+\frac{1}{2} , B+\frac{1}{2}, 0\big \rbrace \Big )^2\), based on a heuristic argument presented in Sect. 4.6.

2.2.1 Finite Time: Fredholm Pfaffian of Matrix Kernel

Our main result valid for all time \(t \geqslant 0\) and all \(A,B > - \frac{1}{2}\) is that the following generating function defined for \(\varsigma >0\) can be written as a Fredholm Pfaffian

$$\begin{aligned} {\mathbb {E}}\left[ \exp (-\varsigma We^{H(t)}) \right] =1+\sum _{n_s=1}^{+\infty } \frac{(-1)^{n_s}}{n_s!}\prod _{p=1}^{n_s} \int _{\mathbb {R}} {\mathrm {d}}r_p \frac{\varsigma }{\varsigma +e^{-r_p}}{\mathrm{Pf}}\left[ K(r_i,r_j)\right] _{n_s \times n_s}. \end{aligned}$$
(2.11)

Here \(W\) is a random variable with an inverse gamma distribution of parameter \(A+B+1\), see (3.22), independent of H(t), which enters the construction of the generating function. The kernel K is matrix valued and represented by a \(2\times 2\) block matrix with elements

$$\begin{aligned} \begin{aligned} K_{11}(r,r')&=\iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi }\frac{w-z}{w+z}G(w)G(z)\cos (\pi w)\cos (\pi z)e^{ -rw-r'z + t \frac{w^3+z^3}{3} },\\ K_{22}(r,r')&=\iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi }\frac{w-z}{w+z}G(w)G(z)\frac{\sin (\pi w)}{\pi }\frac{\sin (\pi z)}{\pi }e^{ -rw-r'z + t \frac{w^3+z^3}{3}},\\ K_{12}(r,r')&=\iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi }\frac{w-z}{w+z}G(w)G(z)\cos (\pi w)\frac{\sin (\pi z)}{\pi }e^{-rw-r'z + t \frac{w^3+z^3}{3}},\\ K_{21}(r,r')&=-K_{12}(r',r). \end{aligned} \end{aligned}$$
(2.12)

where the dependence on the parameters A, B only appears in the function

$$\begin{aligned} G(z) = \frac{\Gamma (A+\frac{1}{2}-z)}{\Gamma (A+\frac{1}{2}+z)}\frac{\Gamma (B+\frac{1}{2}-z)}{\Gamma (B+\frac{1}{2}+z)} \Gamma (2z) \end{aligned}$$
(2.13)

and the contour C is an upwardly oriented vertical line parallel to the imaginary axis with real part between 0 and \(\min \lbrace A+\frac{1}{2},B+\frac{1}{2},1\rbrace \). The series in (2.11) can also be interpreted as a Fredholm Pfaffian, see Eq. (5.17) and Appendix B. The kernel (2.12) has a similar structure to the kernel defining the GSE Tracy–Widom distribution [72]. It is not entirely obvious that the integrals over the \(r_i\) in (2.11) are well-defined, but this is the case. Indeed, one can show that (i) all the entries of K have exponential decay as \(r,r'\) go to \(+\infty \), using a standard contour shift argument, see e.g. [51, Lemma 6.4], and (ii) all entries of K grow at most polynomially with \(\vert r\vert , \vert r'\vert \), which can be shown using a variant of [51, Lemma 7.11].

2.2.2 Finite Time: Result in Terms of a Scalar Kernel

The matrix kernel in (2.12) has the structure of a Schur Pfaffian. Following [54] and Appendix B, we are able to express the generating function in terms of the Fredholm determinant of a scalar kernel

$$\begin{aligned} {\mathbb {E}}\left[ \exp (-\varsigma We^{H(t)}) \right] =\sqrt{\mathrm {Det}(I- {\bar{K}}_{t,\varsigma })_{{\mathbb {L}}^2({\mathbb {R}}_+)}}. \end{aligned}$$
(2.14)

where the kernel \({\bar{K}}_{t,\varsigma }\) is defined for all \((x,y)\in {\mathbb {R}}_+^2\) as

$$\begin{aligned} {\bar{K}}_{t,\varsigma }(x,y) =2\partial _x \iint _{C^2} \frac{{\mathrm {d}}w {\mathrm {d}}z}{(2\mathbf{i }\pi )^2 }G(z)G(w) \frac{\sin (\pi (z-w))}{\sin (\pi (z+w))}\varsigma ^{w+z}e^{-xz-yw + t \frac{w^3+z^3}{3} } \end{aligned}$$
(2.15)

where the function G(z) is defined in (2.13). Again this formula is valid for all times \(t \geqslant 0\) and all \(A,B > - \frac{1}{2}\). In principle, from formula (2.11) or (2.14) the PDF of H(t) for any time t can be extracted, see e.g. [53] for the case \(A,B=+\infty \). Here, we only extract the PDFs in the large time limit, as we now discuss.

Remark 2.2

The kernel \({\bar{K}}_{t,\varsigma }\) can be extended by adding a fictitious variable so that the r.h.s of (2.14) is a \(\tau \)-function of the Kadomtsev-Petviashvili (KP) equation, see Appendix E.

Remark 2.3

Other cases where it is possible to transform a matrix-valued kernel into a scalar kernel have been considered in the random matrix literature, see Refs. [34, Sects. II–III] and [73, Sect. 3.1].

2.2.3 Large Time Limit and Phase Diagram

The phase diagram in the (A, B) plane in the large time limit is shown in Fig. 1. Qualitatively there are three regions. In the region \(A<-1/2\) with \(A<B\), we expect, from the results of [57] for \(B=+\infty \), that the polymer is “bound to the wall” and that the (scaled) height distribution at large time is Gaussian (see also analogous results for other models in [74, 44, Sect. 6], [51, Sect. 8.1]). By “bound to the wall”, we mean that the polymer path spends most of its time at the boundary and does not significantly venture into the bulk; this phenomenon was predicted by Kardar [42], who studied the depinning of the polymer by the random environment. By symmetry the same can be expected for the region \(B<-1/2\) with \(B<A\), which corresponds to a polymer “bound to the Brownian”. In the special case where \(A=B<-1/2\), the nature of the fluctuations is different: there is a competition between the boundary and the initial condition and we expect that the fluctuations have the same distribution as the largest eigenvalue of a \(2\times 2\) GUE matrix, based on heuristic arguments presented in Sect. 4.6. We have, however, no exact formula for the region \(A<-1/2\) or \(B<-1/2\), called the bound phase. Our exact results concern the third region, the quadrant \(A,B \geqslant -1/2\).

The first result is that for any fixed \(A,B > -1/2\), the distribution of the height H(t) converges at large time to the GSE Tracy–Widom distribution \(F_4\)

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {P}}\left( \frac{H(t)}{t^{1/3}}\leqslant s\right) =F_4(s), \quad \quad A,B>-1/2. \end{aligned}$$
(2.16)

When \(A=-1/2\), and for any \(B > -1/2\), we find that the fluctuations are given by the GOE Tracy–Widom distribution,

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {P}}\left( \frac{H(t)}{t^{1/3}}\leqslant s\right) =F_1(s), \quad \quad A=-1/2, B>-1/2. \end{aligned}$$
(2.17)

By symmetry the same holds for \(B=-1/2\) and any \(A>-1/2\). These results are natural to expect. Indeed, when \(B>-1/2\), the initial condition has a drift so negative that the asymptotics of the height function should be the same as for the narrow wedge initial data. The limiting distribution then depends on the value of the boundary parameter A according to the Baik–Rains transition discussed in the Introduction, hence the GSE and GOE Tracy–Widom asymptotics.

To study the “critical stationary” point \((A,B)=(-1/2,-1/2)\) we write

$$\begin{aligned} A + \frac{1}{2} = t^{-1/3} a, \quad \quad B + \frac{1}{2} = t^{-1/3} b, \end{aligned}$$
(2.18)

and consider the large time limit at fixed values of \(a,b\). This corresponds to zooming around the critical stationary point as represented in Fig. 2. It is natural to expect – and we indeed show – that in the large time limit there is a two parameter family of CDFs \(F^{(a, b)}(s)\), indexed by \(a,b\) such that

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {P}}\left( \frac{H(t)}{t^{1/3}}\leqslant s\right) =: F^{(a, b)}(s). \end{aligned}$$
(2.19)
Fig. 1: Phase diagram indicating the distribution of height fluctuations at large time, as a function of the parameters A, B. The nature of fluctuations in the dashed area around \((A,B)=(-1/2,-1/2)\) is explained in Fig. 2

Fig. 2: Zoom into the vicinity of \((A,B) = (-1/2, -1/2)\). The distribution of height fluctuations at large time is indicated as a function of the parameters \(a=t^{1/3}(A+\frac{1}{2}), b=t^{1/3}(B+\frac{1}{2})\)

Here, we first obtain a Fredholm Pfaffian formula for the CDF \(F^{(a, b)}(s)\) with \(a,b>0\), by taking the large time limit of the matrix kernel formula (2.11). It reads

$$\begin{aligned} F^{(a,b)}(s) = \left( 1+\frac{\partial _s}{a+b} \right) {\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )} \end{aligned}$$
(2.20)

where the large time matrix kernel \(K^{(a, b)}\) is given in (6.5). Equivalently, a more convenient formula is obtained by taking the large time limit of the scalar kernel formula (2.14). We then obtain the CDF of the one-point KPZ height H(t) in the critical region, in terms of a Fredholm determinant, for \(a,b>0\)

$$\begin{aligned} F^{(a, b)}(s) =\left( 1+\frac{\partial _s}{a+b} \right) \sqrt{\mathrm {Det}(I-{{\bar{K}}}^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )}} \end{aligned}$$
(2.21)

where the scalar transition kernel \({{\bar{K}}}^{(a,b)}\) takes the form

$$\begin{aligned} {{\bar{K}}}^{(a, b)}(x,y) = \int _{0}^{+\infty } {\mathrm {d}}\lambda \, A^{(a, b)}(x+\lambda )A^{(a, b)}(y+\lambda ) -\frac{1}{2} A^{(a, b)}(x) \int _{0}^{+\infty }{{\mathrm {d}}}\lambda \, A^{(a, b)}(y+\lambda )\, . \end{aligned}$$
(2.22)

Here the function \(A^{(a, b)}(x)\) is defined by the integral representation

$$\begin{aligned} A^{(a, b)}(x) = \int \frac{{{\mathrm {d}}} z}{2\mathbf{i }\pi } \frac{a+z}{a-z} \frac{b+z}{b-z} e^{-xz+ \frac{z^3}{3}}, \end{aligned}$$
(2.23)

where the contour is an upwardly oriented vertical line with real part between 0 and \(\min \lbrace a, b\rbrace \). Finally, introducing the operator \({\hat{A}}_s\) with kernel \({\hat{A}}_s(x,y)= A^{(a, b)}(x+y+s)\), the final and simplest expression for the cross-over CDF, obtained from algebraic manipulations of (2.21), is

$$\begin{aligned} F^{(a, b)}(s) = \frac{1}{2} \left( 1 + \frac{\partial _s}{a+b}\right) \left( \mathrm {Det}(I - {\hat{A}}_s)_{{\mathbb {L}}^2({\mathbb {R}}_+)} + \mathrm {Det}(I + {\hat{A}}_s)_{{\mathbb {L}}^2({\mathbb {R}}_+)}\right) \end{aligned}$$
(2.24)

It is clear from this formula that if \(a,b\rightarrow +\infty \) simultaneously, then \(A^{(a, b)}(x)\) converges to the standard Airy function, and \({{\bar{K}}}^{(a, b)}\) to the kernel associated to the GSE Tracy–Widom distribution (in the form found in [53]). This thus matches smoothly with the result (2.16) valid for any fixed \(A,B>-1/2\). Another interesting limit, that we call \(F^{(a)}(s)= \lim _{b\rightarrow +\infty } F^{(a,b)}(s)\), is the limit \(b\rightarrow + \infty \) at fixed \(a\), which corresponds to the droplet initial condition in the critical region for the wall parameter. A formula for that CDF was obtained, for \(a\geqslant 0\), using the RBA in [56]. It was conjectured to coincide with the GSE-GOE-Gaussian crossover introduced by Baik and Rains [74], see also [44, 45] in the context of last passage percolation. This crossover was also studied in the context of spiked models of random matrices from the GSE [75]. By the \(A \leftrightarrow B\) symmetry, the case \(A\rightarrow +\infty \) is similar, and we obtain the same distribution \(F^{(b)}(s)= \lim _{a\rightarrow +\infty } F^{(a,b)}(s)\), which corresponds to the model with Brownian initial data in the presence of an infinite repulsive wall (see [64] for a more mathematical interpretation). This is consistent with Tracy–Widom GOE fluctuations for any fixed \(A=-1/2\) and \(B>-1/2\) \((a=0,b=\infty )\) or fixed \(B=-1/2\) and \(A>-1/2\) (\(b=0, a=\infty )\).
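The convergence of \(A^{(a, b)}\) to the Airy function can be illustrated numerically in a few lines. The sketch below (with an arbitrary choice of contour offset c and of the truncation of the integration range, both our own) evaluates the contour integral (2.23) on the vertical line \(\mathrm{Re}\,z=c\), \(0<c<\min (a,b)\), and compares it with \(\mathrm {Ai}(x)\) for large a, b, using the standard vertical-line representation of the Airy function.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def A_ab(x, a, b, c=0.5):
    # numerical evaluation of (2.23) on the contour Re z = c with 0 < c < min(a, b);
    # Re(z^3)/3 = (c^3 - 3 c y^2)/3 gives Gaussian damping, so truncating |Im z| <= 8 is harmless
    def f(y):
        z = c + 1j * y
        val = (a + z) / (a - z) * (b + z) / (b - z) * np.exp(-x * z + z**3 / 3)
        return val.real                           # the imaginary part integrates to zero by symmetry
    I, _ = quad(f, -8.0, 8.0, limit=400)
    return I / (2.0 * np.pi)

x = 1.0
print(A_ab(x, a=200.0, b=200.0), airy(x)[0])      # approaches Ai(x) when a, b are large
print(A_ab(x, a=1.0, b=2.0))                      # a generic value in the crossover regime
```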

A more difficult limit, which we discuss now, is the stationary critical point \((A,B)=(-1/2,-1/2)\) corresponding to both \(a,b\rightarrow 0\).

2.3 Stationary Critical Distribution

At the point \((A,B)=(-1/2,-1/2)\) we have found a remarkable universal distribution, corresponding to the CDF \(F(s):=F^{(0,0)}(s)\). Taking the limit \(a, b\rightarrow 0\) is delicate (as it is also for obtaining the Baik–Rains distribution) and we found that the representation of the kernel \({{\bar{K}}}^{(a, b)}\) as in (2.22) was crucial. The limit is performed in Sect. 7.3. We have shown that the limit is well defined, i.e. independent of the ratio \(r=b/a\). We have obtained the result in several equivalent forms. We recall that we are characterizing the CDF F(s) such that

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {P}}\left( \frac{H(t)}{t^{1/3}}\leqslant s\right) = F(s), \quad \quad A=B=-1/2. \end{aligned}$$
(2.25)

The first form is in terms of the sum of two Fredholm determinants. Defining the following two kernels acting on functions in \({\mathbb {L}}^2( {\mathbb {R}}_+)\),

$$\begin{aligned} \mathrm {Ai}_s(x,y) = {\mathrm {Ai}}(s + x + y), \quad \quad {\widetilde{\mathrm {Ai}}}_s(x,y) = {\mathrm {Ai}}(s + x + y) + \int _0^{+\infty } {\mathrm {d}}\lambda \, {\mathrm {Ai}}(s + x + \lambda ), \end{aligned}$$
(2.26)

then

$$\begin{aligned} F(s) = \partial _s \left[ 2 \mathrm {Det}(I + {{\widetilde{\mathrm {Ai}}}}_s) + (s-2)\mathrm {Det}(I + \mathrm {Ai}_s)\right] . \end{aligned}$$
(2.27)

The second form is expressed in terms of the CDF’s of the GOE and GUE Tracy–Widom distributions \(F_1\) and \(F_2\) respectively, as

$$\begin{aligned} F(s)=\partial _s\left[ \frac{F_2(s)}{F_1(s)} \int _{-\infty }^s {\mathrm {d}}t \, \frac{F_1(t)^4}{F_2(t)^2} \right] . \end{aligned}$$
(2.28)

This formula is very reminiscent of the formula for the Baik–Rains distribution for the full-space stationary problem [recalled in (C.12)]. The third form is expressed in terms of the Hastings–McLeod solution q(s) of the Painlevé II equation as

$$\begin{aligned} F(s)=\partial _s\left[ e^{ - \frac{1}{2} \int _s^{+\infty } {{\mathrm {d}}} r [(r-s)q^2(r)-q(r)] } \int _{-\infty }^{s} {\mathrm {d}}r \, e^{-2\int _r^{+\infty } {\mathrm {d}}t \, q(t)} \right] . \end{aligned}$$
(2.29)
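Formula (2.28) is well suited to numerical evaluation. The sketch below (an illustration with ad hoc quadrature, truncation and grid parameters) approximates the required Fredholm determinants by Gauss–Legendre quadrature on a truncated half-line, using the kernel \(\mathrm {Ai}_s\) of (2.26) together with the known identities \(F_1(s)={\mathrm {Det}}(I-\mathrm {Ai}_s)_{{\mathbb {L}}^2({\mathbb {R}}_+)}\) and \(F_2(s)={\mathrm {Det}}(I-\mathrm {Ai}_s^2)_{{\mathbb {L}}^2({\mathbb {R}}_+)}\) (which are not restated in the text), and then assembles F(s) from (2.28) by numerical integration and differentiation.

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import cumulative_trapezoid

def F1_F2(s, n=60, L=12.0):
    # Nystrom approximation of Det(I -/+ Ai_s) on L^2(R_+), with the half-line truncated to [0, L]
    u, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * L * (u + 1.0)
    w = 0.5 * L * w
    A = np.sqrt(np.outer(w, w)) * airy(x[:, None] + x[None, :] + s)[0]
    F1 = np.linalg.det(np.eye(n) - A)             # Det(I - Ai_s), GOE Tracy-Widom CDF
    F2 = F1 * np.linalg.det(np.eye(n) + A)        # Det(I - Ai_s^2), GUE Tracy-Widom CDF
    return F1, F2

# F(s) = d/ds [ F2(s)/F1(s) * int_{-inf}^s F1(t)^4 / F2(t)^2 dt ], eq. (2.28);
# the integrand is negligible below s = -8, so the integral is truncated there
s = np.linspace(-8.0, 5.0, 400)
F1, F2 = np.transpose([F1_F2(si) for si in s])
inner = cumulative_trapezoid(F1**4 / F2**2, s, initial=0.0)
F = np.gradient((F2 / F1) * inner, s)
i0 = np.argmin(np.abs(s))
print(s[i0], F[i0])                               # value of the CDF F near s = 0
```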

The first moments and cumulants are given in Table 1, and we plot in Fig. 3 the CDF F together with its derivative, the PDF. Finally, we computed and plotted in Appendix D the asymptotics of the CDF F(s)

  • for large positive s

    $$\begin{aligned} \begin{aligned}&1 - F(s) \\&= \frac{s^{3/4}e^{-\frac{2 s^{3/2}}{3}}}{4\sqrt{\pi }} \left[ 1+\frac{139s^{-3/2}}{48 }-\frac{11423s^{-3}}{4608 } +\frac{3907027s^{-9/2}}{663552 } \right. \\&\quad \left. -\frac{2886147455s^{-6}}{127401984 }+o(s^{-6})\right] \end{aligned} \end{aligned}$$
    (2.30)
  • and for large negative s

    $$\begin{aligned} \begin{aligned} F(s)&=2^{-203/48}e^{\zeta '(-1)/2} \exp \big [-\frac{\left| s\right| ^3}{24}-\frac{\left| s\right| ^{3/2}}{\sqrt{2}}+\frac{23}{16}\log \left| s\right| +\frac{91}{8 \sqrt{2} \left| s\right| ^{3/2}}\\&\quad -\frac{3957}{128 \, \left| s\right| ^3}+\frac{28717}{128 \sqrt{2} \left| s\right| ^{9/2}}-\frac{469683}{512 \left| s\right| ^6}+o(s^{-6})\big ] \end{aligned} \end{aligned}$$
    (2.31)
Table 1 Mean, variance, skewness and excess kurtosis of the half-space critical stationary distribution and comparison with the Tracy–Widom and Baik–Rains distributions (see [76, Sect. 9.4.1] and [77])

As we mentioned above, it remains to be shown that our formula is equivalent to the Fredholm Pfaffian formula obtained in [62, Theorem 2.7] (setting there \(\delta =0\) and \(u=0\)) as expected from universality.

Fig. 3: Left: critical stationary CDF F. Right: corresponding PDF. See Fig. 6 in Appendix D for the comparison with the asymptotics (\(s\rightarrow \pm \infty \)) in linear and logarithmic scales

2.4 Mathematical Aspects

The results presented in this article rely on a combination of physics and mathematics methods, but we focus here on the physics results and make clear that most of them are not proved according to the standards of rigor of the mathematics literature (in particular the results stated as Claims below). It remains a challenge to turn the arguments that we present here into mathematical theorems. Let us comment further on these aspects for the mathematically inclined reader.

The first difficulty from the mathematical point of view is that we cannot rigorously characterize the distribution of the KPZ equation through its moments, because they grow too fast to uniquely determine the distribution. This is why the moment generating series that we consider in Sect. 5 are actually divergent series, but the formal power series become convergent after certain manipulations and exchanges of series/integrals. For the full-space KPZ equation, it has been proved (see e.g. [55]) that these manipulations lead to the correct answer. It could be possible to overcome this issue in our case by working on a model for which the moment problem is well-defined, and take a scaling limit to the KPZ equation. Such a strategy has been implemented for instance in [22, 31, 78, 79] for the full-space KPZ equation and in [49] for the half-space KPZ equation with \(A=-1/2\) and droplet initial data. Another possible approach is provided by the framework of half-space Macdonald processes [51], which allows one to prove Laplace transform formulae despite the divergence of moments.

The second obstacle is that in order to prove the results of Sect. 3 below, one would need to prove the completeness of the Bethe ansatz eigenfunctions. Actually, we present in Sect. 4 another approach to obtain the same moment formulae. It relies on rigorous formulae for the log-gamma polymer from [51], and we take a scaling limit to the KPZ equation. We obtain a nested contour integral formula for the moments of \(Z_A^B(x,t)\) in Claim 4.7. Note that this formula allows one to take \(x>0\). Then, for \(x=0\), we may move the contours together, appealing to a combinatorial conjecture from Borodin, Bufetov and Corwin [55, Conjecture 5.2], and a Pfaffian structure appears. Hence we see that assuming completeness of the Bethe ansatz eigenfunctions or using this conjecture leads to the same moment formula. The results from [80, 81] suggest that the two problems are indeed related.

Finally, the asymptotic analysis of Fredholm Pfaffians such as (2.11) is delicate, especially in the critical stationary regime. In particular, we have first performed a large time limit for positive a, b, and then let the parameters a, b go to 0. This allowed us to benefit from the structure of the kernel (2.22), which eventually led to a very simple formula for the distribution F(s). It would be interesting to find a generalization of the form (2.22) at finite time, and prove that the limits commute, i.e. that one can take first \(A,B\rightarrow -1/2\) and then study the large time limit.

3 Moments from the Replica Bethe Ansatz

3.1 Quantum Mechanics and Bethe Ansatz

In this Section we use the replica Bethe ansatz method to calculate the integer moments of the partition sum. The equal time multi-point moments of the solution of the SHE, Z(x, t), over the KPZ noise can be expressed [82] as a matrix element of the quantum mechanical evolution operator in imaginary time of the Lieb–Liniger model [83]

$$\begin{aligned} {\mathbb {E}} \left[ Z (x_1,t) \dots Z (x_n,t)\right] = \langle x_1 \dots x_n | e^{- t H_n} |\Psi (t=0) \rangle \end{aligned}$$
(3.1)

Here \(H_n\) is the Hamiltonian of the Lieb–Liniger model [83] for n quantum particles with attractive delta function interactions of strength \(c=-{{\bar{c}}} <0\)

$$\begin{aligned} H_n = - \sum _{i=1}^n \partial _{x_i}^2 - 2 {{\bar{c}}} \sum _{1 \leqslant i < j \leqslant n} \delta (x_i-x_j) \end{aligned}$$
(3.2)

with, here and below, \({{\bar{c}}}=1\) in our units. The initial state \(|\Psi (t=0) \rangle \) is such that

$$\begin{aligned} {\mathbb {E}}\left[ Z (x_1,0) \dots Z (x_n,0)\right] = \langle x_1 \dots x_n | \Psi (t=0) \rangle \end{aligned}$$
(3.3)

Since we are considering the Brownian initial condition and are interested in averages over both the Brownian motion and the KPZ noise, we must take the initial state \(|\Psi (t=0) \rangle \) as

$$\begin{aligned} \langle x_1 \dots x_n | \Psi (t=0) \rangle = \Phi _0(x_1,\dots ,x_n) := {\mathbb {E}}_{{\mathcal {B}}}\left[ \exp \left( \sum _{j=1}^n {\mathcal {B}}(x_j) -(B+1/2)x_j \right) \right] \nonumber \\ \end{aligned}$$
(3.4)

A simple calculation shows that \(\Phi _0(x_1,\dots ,x_n)\) is the fully symmetric function which, in the sector \(0 \leqslant x_1\leqslant \dots \leqslant x_n\), takes the form

$$\begin{aligned} \Phi _0(x_1,\dots ,x_n) = \exp \left( \sum _{j=1}^n \frac{1}{2} (2n-2j+1)x_j -(B+1/2) x_j \right) , \end{aligned}$$
(3.5)
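As a quick sanity check of (3.5) (a Monte Carlo sketch with illustrative values of \(x_1\leqslant x_2\) and of B, not part of the derivation), note that for \(n=2\) the Gaussian expectation gives \({\mathbb {E}}_{{\mathcal {B}}}[e^{{\mathcal {B}}(x_1)+{\mathcal {B}}(x_2)}]=e^{(3x_1+x_2)/2}\), in agreement with \(\sum _{j} \frac{1}{2}(2n-2j+1)x_j\).

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2, B = 0.4, 1.1, 0.3                 # illustrative values, 0 <= x1 <= x2
N = 10**6
B1 = np.sqrt(x1) * rng.standard_normal(N)            # B(x1)
B2 = B1 + np.sqrt(x2 - x1) * rng.standard_normal(N)  # B(x2) via an independent increment
mc = np.mean(np.exp(B1 + B2 - (B + 0.5) * (x1 + x2)))
exact = np.exp(1.5 * x1 + 0.5 * x2 - (B + 0.5) * (x1 + x2))   # eq. (3.5) with n = 2
print(mc, exact)                          # agree within Monte Carlo error
```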

We can now rewrite (3.1) at coinciding points using the decomposition of the evolution operator \(e^{- t H_n}\) in terms of the eigenstates of \(H_n\) as

$$\begin{aligned} {\mathbb {E}}\left[ Z (x,t)^n\right] = \sum _\mu \Psi _\mu (x,\dots , x) \langle \Psi _\mu | \Phi _0 \rangle \frac{1}{||\mu ||^2} e^{- t E_\mu } \end{aligned}$$
(3.6)

Here the un-normalized eigenfunctions of \(H_n\) are denoted \(\Psi _\mu \) (with norm denoted \(||\mu ||\)) and their eigenenergies \(E_\mu \). We used the fact that only symmetric (i.e. bosonic) eigenstates contribute, since the initial and final states are fully symmetric in the \(x_i\). Hence \(\sum _\mu \) denotes a sum over all bosonic eigenstates of the Lieb–Liniger model, also called the delta Bose gas, and \(\langle \Psi _\mu | \Phi _0 \rangle \) denotes the overlap, i.e. the Hermitian scalar product of the initial state (3.5) with the eigenstate \(\Psi _\mu \).

We should remember now that \(H_n\) is defined on the half-line \(x \geqslant 0\). The boundary condition at the wall with parameter A translates into the same boundary condition for the wavefunctions (in each of their coordinates). This half-line quantum mechanical problem can be solved by the Bethe ansatz for \(A=+\infty \), i.e. for Dirichlet boundary condition [47, 84, 85] (see also [86, Sect. 5.1]), and this fact was used in [87]. It can also be solved for arbitrary A [55, 86, 88,89,90,91,92,93], which led to the moment formula in [56, 94] and [57].

From the Bethe ansatz the eigenstates \(\Psi _\mu \) are thus Bethe states, i.e. superpositions of plane waves over all permutations P of the n rapidities \(\lambda _j\) for \(j\in [1,n]\) with an additional summation over opposite pairs \(\pm \lambda _j\) due to the infinite hard wall. The bosonic (fully symmetric) eigenstates can be obtained everywhere from their expression in the sector \(0\leqslant x_1 \leqslant \dots \leqslant x_n\), which reads

$$\begin{aligned} \begin{aligned} \Psi _\mu (x_1,\dots ,x_n)&= \frac{1}{(2 \mathbf{i })^{n}} \sum _{P \in S_n} \prod _{p=1}^n \left( \sum _{\varepsilon _p=\pm 1} \varepsilon _p e^{\mathbf{i }\varepsilon _p x_p \lambda _{P(p)}} A[\varepsilon _1 \lambda _{P(1)},\varepsilon _2 \lambda _{P(2)}, \dots , \varepsilon _n \lambda _{P(n)}] \right) \\ A[\lambda _1,\dots ,\lambda _n]&= \prod _{n \geqslant \ell > k \geqslant 1} \left( 1+ \frac{\mathbf{i }{{\bar{c}}}}{\lambda _\ell - \lambda _k}\right) \left( 1+ \frac{\mathbf{i }{{\bar{c}}}}{\lambda _\ell + \lambda _k}\right) \prod _{\ell =1}^n \left( 1+\mathbf{i }\frac{\lambda _\ell }{A}\right) \end{aligned} \end{aligned}$$
(3.7)

This wavefunction automatically satisfies both

  1.

    The matching condition arising from the \(\delta (x_i-x_j)\) interaction

    $$\begin{aligned} \left( \partial _{x_{i+1}}-\partial _{x_i} +{\bar{c}}\right) \Psi _\mu (x_1,\dots ,x_n)\mid _{x_{i+1}=x_i}=0 \end{aligned}$$
    (3.8)
  2.

    The boundary condition \(\partial _{x_i}\Psi _\mu (x_1,\dots ,x_n)\big \vert _{x_i=0}=A \Psi _\mu (x_1,\dots ,x_n)\big \vert _{x_i=0}\) for all \(i \in [1,n]\) (a symbolic check of both conditions for \(n=2\) is sketched below).
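Both conditions can be verified symbolically for small n. The following sketch (our own check for \(n=2\), with generic numerical values of the rapidities, of \({{\bar{c}}}\) and of A) builds the Bethe state (3.7) with sympy and evaluates the matching condition (3.8) and the boundary condition; both vanish up to rounding.

```python
import sympy as sp
from itertools import permutations, product

x1, x2 = sp.symbols('x1 x2', real=True)
l1, l2, cb, A = sp.symbols('lambda1 lambda2 cbar A', real=True)

def amp(m1, m2):
    # the amplitude A[mu1, mu2] of (3.7) for n = 2
    return (1 + sp.I*cb/(m2 - m1)) * (1 + sp.I*cb/(m2 + m1)) * (1 + sp.I*m1/A) * (1 + sp.I*m2/A)

Psi = sum(e1 * e2 * sp.exp(sp.I*(e1*x1*p1 + e2*x2*p2)) * amp(e1*p1, e2*p2)
          for (p1, p2) in permutations((l1, l2))
          for (e1, e2) in product((1, -1), repeat=2)) / (2*sp.I)**2

match = (sp.diff(Psi, x2) - sp.diff(Psi, x1) + cb*Psi).subs(x2, x1)   # condition (3.8)
bc = (sp.diff(Psi, x1) - A*Psi).subs(x1, 0)                           # Robin condition at the wall

num = {l1: 0.7, l2: 1.3, cb: 1.0, A: 0.9, x1: 0.6, x2: 1.1}
print(sp.N(match.subs(num)), sp.N(bc.subs(num)))                      # both are ~ 0
```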

The allowed values for the rapidities \(\lambda _i\), which parametrize the true physical eigenstates, are determined by the Bethe equations arising from the boundary conditions at \(x=L\), as discussed below. One will find that the normalized eigenstates \(\psi _\mu =\Psi _\mu /||\mu ||\) vanish as \((\lambda _i-\lambda _j)\) or \((\lambda _i+\lambda _j)\) when two rapidities become equal or opposite: hence the rapidities obey an exclusion principle.

The detailed Bethe equations, which determine the allowed values for the set of rapidities \(\lbrace \lambda _j \rbrace \), depend on the choice of boundary condition at \(x=L\). However, in the \(L\rightarrow +\infty \) limit, these details do not matter. For simplicity we choose a hard wall at \(x=L\). The Bethe equations then read

$$\begin{aligned} e^{2\mathbf{i }\lambda _j L}=\frac{A-\mathbf{i }\lambda _j}{A+\mathbf{i }\lambda _j} \prod _{\ell \ne j}\frac{\lambda _j-\lambda _\ell -\mathbf{i }{\bar{c}}}{\lambda _j-\lambda _\ell +\mathbf{i }{\bar{c}}}\frac{\lambda _j+\lambda _\ell -\mathbf{i }{\bar{c}}}{\lambda _j+\lambda _\ell +\mathbf{i }{\bar{c}}} \end{aligned}$$
(3.9)

In the case of the infinite hard wall, these equations are also given in Ref. [85] and their solutions in the large L limit were studied in Ref. [95]. The structure of the states for infinite L is found to be similar to the standard case, i.e. the general eigenstates are built by partitioning the n particles into a set of \(n_s\) bound states formed by \(m_j \geqslant 1\) particles with \(n=\sum _{j=1}^{n_s} m_j\). Each bound state \(\mu \) is indexed by a set \(\{ k_j, m_j\}_{j=1\dots n_s}\) where the \(k_j\)’s are real numbers. These states are perfect strings [96], i.e. a set of rapidities

$$\begin{aligned} \lambda ^{j, a}=k_j +\frac{\mathbf{i }{{\bar{c}}}}{2}(m_j+1-2a) \end{aligned}$$
(3.10)

where \(a = 1,\dots ,m_j\) labels the rapidities within the string. Such eigenstates have momentum and energy

$$\begin{aligned} K_\mu =\sum _{j=1}^{n_s} m_j k_j, \qquad E_\mu =\sum _{j=1}^{n_s} m_j k_j^2-\frac{{{\bar{c}}}^2}{12} m_j(m_j^2-1). \end{aligned}$$
(3.11)

The ground state corresponds to a single n-string with \(k_1=0\). The difference with the standard case is that the states are now invariant under a sign change of any of the momenta, \(\lambda _j \rightarrow -\lambda _j\), i.e. \(k_j \rightarrow -k_j\). From now on, we will denote the wavefunctions of the string states as \(\Psi _{\{k_\ell ,m_\ell \}}\).

It is important to note that although for \(A=+\infty \) the strings are the only solutions of the Bethe equations at large L, for finite A there are other solutions which correspond to so-called boundary bound states. These solutions have been obtained and studied in detail in [57]. As we will see below, we will not need them in this work.

3.2 Moment Formula

To calculate the n-th moments of Z(x, t) from formula (3.6), we need to perform a summation over the eigenstates. For \(A<+\infty \) these eigenstates contain both the string states and the boundary bound states mentioned above. Our strategy here will be similar to the one in [56], i.e. we will calculate the moments for \(n < 2 A + 1\), which turns out to be sufficient to perform the analytic continuation in n and obtain the generating function for any \(A> -1/2\), using a method similar to the one in [30]. The nice feature is that when \(n < 2 A+1\) there are no boundary bound states. To see this, consider Table 1 in [57], which contains the classification of the boundary bound states for this problem. For \(A > -1/2\), an m-particle boundary bound state must obey \(m \geqslant {\lfloor }2 A{\rfloor }+2 \geqslant 2A +1\). On the other hand, from \(n < 2 A+1\), one must have \(m \leqslant n < 2 A+1\), which excludes the bound state. Hence we need to consider only the string states.

A formula for the inverse of the squared norm of an arbitrary string state was obtained for \(A< +\infty \) in [94] and [57], consistent with the results of [56], as

$$\begin{aligned} \begin{aligned} \Vert \mu \Vert ^2&:= \int _0^L {\mathrm {d}}x_1\dots \int _0^L {\mathrm {d}}x_n |\Psi _{\{k_\ell ,m_\ell \}}(x_1,\dots , x_n)|^2 \\ \frac{1}{||\mu ||^2}&= \frac{1}{n!} {{\bar{c}}}^{n-n_s} 2^{n_s} \prod _{i=1}^{n_s} S_{k_i,m_i} H_{k_i,m_i}\prod _{1 \leqslant i<j \leqslant n_s} D_{k_i,m_i,k_j,m_j} L^{-n_s} \\ D_{k_1,m_1,k_2,m_2}&=\left( \frac{4 (k_1-k_2)^2 + (m_1-m_2)^2 c^2}{4 (k_1-k_2)^2 + (m_1+m_2)^2 c^2}\right) \times \left( \frac{4 (k_1+k_2)^2 + (m_1-m_2)^2 c^2}{4 (k_1+k_2)^2 + (m_1+m_2)^2 c^2}\right) \\ S_{k,m}&= \frac{2^{2m-2}}{m^2} \prod _{p=1}^{[m/2]} \frac{4 k^2 + c^2 (m-2 p)^2}{4 k^2+c^2 (m+1-2 p)^2} \\ H_{k,m}&= \prod _{a=1}^m \frac{A^2}{A^2+(k+\frac{\mathbf{i }{\bar{c}}}{2}(m+1-2a))^2} \end{aligned}\nonumber \\ \end{aligned}$$
(3.12)

with \(S_{k,1}=1\). Note that we have only kept the leading term in L as \(L\rightarrow +\infty \). Inserting the norm formula (3.12) into (3.6), we obtain the starting formula for the integer moments of the partition sum with Brownian weight on the endpoint in the limit \(L \rightarrow +\infty \)

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\left[ Z (x,t)^n\right]&= \sum _{n_s=1}^n \frac{ 2^{n_s} {{\bar{c}}}^n }{n_s! {{\bar{c}}}^{n_s} n! } \prod _{p=1}^{n_s} \sum _{m_p \geqslant 1} \int _{\mathbb {R}} \frac{{\mathrm {d}}k_p}{2 \pi } m_p S_{k_p,m_p}H_{k_p,m_p} e^{ (m_p^3-m_p) \frac{{{\bar{c}}}^2 t}{12} - m_p k_p^2 t } \\&\quad \times \delta _{n,\sum _{j=1}^{n_s} m_j} \prod _{i<j}^{n_s} D_{k_i,m_i,k_j,m_j} \Psi _{\{k_\ell ,m_\ell \}}(x,\dots , x) \, \langle \Psi _{\{k_\ell ,m_\ell \}} | \Phi _0 \rangle \end{aligned} \end{aligned}$$
(3.13)

Here the Kronecker delta enforces the constraint \(\sum _{j=1}^{n_s} m_j=n\) with \(m_j \geqslant 1\) and in the summation over states we used \(\sum _{k_j} \rightarrow m_j L \int _{\mathbb {R}} \frac{{\mathrm {d}}k}{2\pi }\) which holds also here in the large L limit: the momenta sums become continuous and one can use that the string momenta \(m_j k_j\) correspond to free particles as in Refs. [15, 24, 25, 57, 87].

We can simplify the factor \(\Psi _{\{k_\ell ,m_\ell \}}(x,\dots ,x)\) in (3.13). For the general Bethe state (3.7) (before insertion of the string solution), the \(x=0\) value reads

$$\begin{aligned} \Psi _{\mu }(0,\dots ,0) = \frac{n!}{A^n} \prod _{j=1}^n \lambda _j \end{aligned}$$
(3.14)

Inserting the string solution we see that we can replace in (3.13) at the wall \(x=0\)

$$\begin{aligned} \Psi _{\{k_\ell ,m_\ell \}}(0,\dots ,0)= & {} \frac{n!}{A^n} \prod _{j=1}^{n_s} A_{k_j,m_j} \end{aligned}$$
(3.15)
$$\begin{aligned} A_{k,m}= & {} \prod _{a=1}^m \left( k+ \mathbf{i }\frac{{{\bar{c}}}}{2} (m+1-2 a)\right) = (-\mathbf{i }{{\bar{c}}})^m \frac{\Gamma (\frac{1+m}{2} + \frac{\mathbf{i }k}{{{\bar{c}}}} )}{\Gamma (\frac{1-m}{2} + \frac{\mathbf{i }k}{{{\bar{c}}}} )} \end{aligned}$$
(3.16)
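The product representation and the Gamma-function form in (3.16) can be compared directly; a short numerical check (with illustrative values of k, m and \({{\bar{c}}}\) of our own choosing) reads:

```python
import numpy as np
from scipy.special import gamma

k, m, cb = 0.8, 3, 1.0                    # illustrative values
prod = np.prod([k + 1j * cb / 2 * (m + 1 - 2 * a) for a in range(1, m + 1)])
closed = (-1j * cb)**m * gamma((1 + m) / 2 + 1j * k / cb) / gamma((1 - m) / 2 + 1j * k / cb)
print(prod, closed)                        # the two sides of (3.16) agree
```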

To obtain the n-th moment in (3.13) we still need to calculate the overlap \(\langle \Psi _{\{k_\ell ,m_\ell \}} | \Phi _0 \rangle \) where \(\Phi _0\) is given in (3.5). In general it involves sums over permutations and leads to complicated expressions, but in our case a simple structure emerges, akin to the one known in full-space for a few initial conditions (droplet, half-flat, Brownian). Here, as we find in Appendix A, the result in the half-space for Brownian initial conditions is quite simple

$$\begin{aligned} \langle \Psi _\mu | \Phi _0 \rangle =\frac{n!}{A^n} \frac{\Gamma (A+B+1)}{\Gamma (A+B-n+1)} \prod _{j=1}^n \frac{\lambda _j}{B^2+\lambda _j^2} \end{aligned}$$
(3.17)

This holds under the condition that the integral converges, that is \(\frac{n}{2} < B+ \frac{1}{2}\), which we will also assume from now on. Inserting the rapidities \(\lambda _j\) of the string state, one sees that the denominators in the product in the overlap (3.17) read

$$\begin{aligned} \begin{aligned} E_{k,m}&=\prod _{a=1}^{m} \frac{1}{B^2+(k+\mathbf{i }\frac{{{\bar{c}}}}{2}(m+1-2a))^2} \\&=\frac{1}{{\bar{c}}^{2m}}\frac{\Gamma (\frac{1-m}{2}+\frac{B+\mathbf{i }k}{{\bar{c}}})}{\Gamma (\frac{1+m}{2}+\frac{B+\mathbf{i }k}{{\bar{c}}})}\frac{\Gamma (\frac{1-m}{2}+\frac{B-\mathbf{i }k}{{\bar{c}}})}{\Gamma (\frac{1+m}{2}+\frac{B-\mathbf{i }k}{{\bar{c}}})} \end{aligned} \end{aligned}$$
(3.18)

while the numerator was already calculated in (3.15). We can thus define \(C_{k,m}=A^2_{k,m}E_{k,m}\) and, putting everything together, we obtain the starting expression for the integer moments, denoting here and below \(Z(0,t)=Z(t)\)

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\left[ Z (t)^n\right]&= \frac{\Gamma (A+B+1)}{\Gamma (A+B-n+1)} \sum _{n_s=1}^n \frac{2^{n_s} {{\bar{c}}}^n n! }{n_s! {{\bar{c}}}^{n_s} } \\&\quad \times \prod _{p=1}^{n_s} \sum _{m_p \geqslant 1}\int _{\mathbb {R}} \frac{{\mathrm {d}}k_p}{2 \pi }m_p C_{k_p,m_p} S_{k_p,m_p}H_{k_p,m_p} e^{ (m_p^3-m_p) \frac{{{\bar{c}}}^2 t}{12} - m_p k_p^2 t } \\&\qquad \delta _{n,\sum _{j=1}^{n_s} m_j} \, \prod _{i<j}^{n_s} D_{k_i,m_i,k_j,m_j} \end{aligned} \end{aligned}$$
(3.19)

where we recall the constraint \(\sum _{j=1}^{n_s} m_j=n\). Let us use \({{\bar{c}}}=1\) from now on. Denoting

$$\begin{aligned} \begin{aligned} B_{k,m}&= 4 m^2 C_{k,m} S_{k,m} H_{k,m}\\&=\frac{2 k}{\pi } \sinh (2 \pi k) \Gamma (2 \mathbf{i }k +m) \Gamma (-2 \mathbf{i }k + m)\\&\quad \times \frac{\Gamma (\frac{1-m}{2}+A+\mathbf{i }k)}{\Gamma (\frac{1+m}{2}+A+\mathbf{i }k)}\frac{\Gamma (\frac{1-m}{2}+A-\mathbf{i }k)}{\Gamma (\frac{1+m}{2}+A-\mathbf{i }k)} \frac{\Gamma (\frac{1-m}{2}+B+\mathbf{i }k)}{\Gamma (\frac{1+m}{2}+B+\mathbf{i }k)}\frac{\Gamma (\frac{1-m}{2}+B-\mathbf{i }k)}{\Gamma (\frac{1+m}{2}+B-\mathbf{i }k)} \end{aligned} \end{aligned}$$
(3.20)

the starting formula for the moments is then

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ Z (t)^n\right] =\frac{\Gamma (A+B+1)}{\Gamma (A+B-n+1)} \\&\sum _{n_s=1}^n \frac{ n! 2^{n_s} }{n_s! } \prod _{p=1}^{n_s} \sum _{m_p \geqslant 1} \int _{\mathbb {R}} \frac{{\mathrm {d}}k_p}{2 \pi } \frac{B_{k_p,m_p}}{4 m_p} e^{ (m_p^3-m_p) \frac{ t}{12} - m_p k_p^2 t } \delta _{n,\sum _{j=1}^{n_s} m_j} \prod _{i<j}^{n_s} D_{k_i,m_i,k_j,m_j} \end{aligned} \end{aligned}$$
(3.21)

where \(B_{k,m}\) is given in (3.20) and \(D_{k_i,m_i,k_j,m_j}\) is given in (3.12) and where we recall the constraint \(\sum _{j=1}^{n_s} m_j=n\).

3.3 Decorated Moments

As in Refs. [30, 97], it is useful to eliminate the Gamma factor \(\frac{\Gamma (A+B+1)}{\Gamma (A+B-n+1)}\) in (3.21). To this aim we introduce a random variable \(W\sim \mathrm {Gamma}^{-1}(A+B+1)\), independent of the KPZ height, which is inverse gamma distributed with parameter \(A+B+1\), i.e. with PDF

$$\begin{aligned} p_W(x)=\frac{1}{\Gamma (A+B+1)}x^{-A-B-2}e^{-\frac{1}{x}}\Theta (x) \end{aligned}$$
(3.22)

The n-th moment of \(W\) is given by

$$\begin{aligned} {\mathbb {E}}[W^n]=\frac{\Gamma (A+B-n+1)}{\Gamma (A+B+1)} \end{aligned}$$
(3.23)

As we will see, \( {\mathbb {E}}\left[ W^n Z (t)^n\right] \) will serve as the basis to form a Fredholm Pfaffian.
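The density (3.22) and the moment formula (3.23) can be checked by direct quadrature; the following sketch uses illustrative values of A, B and n (with \(n<A+B+1\) so that the moment exists).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

A, B, n = 1.3, 0.7, 2                      # illustrative values with n < A + B + 1
theta = A + B + 1.0
pdf = lambda x: x**(-theta - 1.0) * np.exp(-1.0 / x) / gamma(theta)    # eq. (3.22)
norm, _ = quad(pdf, 0.0, np.inf)
moment, _ = quad(lambda x: x**n * pdf(x), 0.0, np.inf)
print(norm, moment, gamma(theta - n) / gamma(theta))   # normalization, and the two sides of (3.23)
```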

3.4 Moments in Terms of a Pfaffian

An important identity, which makes the problem solvable in the end, is that the inverse norms of the states can be expressed as a Schur Pfaffian. Introducing the reduced variables \(X_{2p-1} = m_p + 2 \mathbf{i }k_p \) and \(X_{2p} = m_p - 2 \mathbf{i }k_p\) for \(p\in [1,n_s]\), the product entering the inverse norm reads

$$\begin{aligned} \prod _{1 \leqslant i<j \leqslant n_s} D_{k_i,m_i,k_j,m_j} = \prod _{j=1}^{n_s} \frac{m_j}{2 \mathbf{i }k_j} \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j} \right] \end{aligned}$$
(3.24)

where we recall that the Pfaffian of an anti-symmetric matrix A of size \(N \times N\), with N even, is defined by

$$\begin{aligned} {\mathrm{Pf}}(A)=\sqrt{{\mathrm{Det}}(A)}=\sum _{\begin{array}{c} \sigma \in S_N,\\ \sigma (2p-1)<\sigma (2p) \end{array}}{\mathrm{sign}}(\sigma )\prod _{p=1}^{N/2}A_{\sigma (2p-1),\sigma (2p)} \end{aligned}$$
(3.25)

and that the Schur Pfaffian is given by (see Ref. [98])

$$\begin{aligned} {\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j} \right] =\prod _{i<j}\frac{X_i-X_j}{X_i+X_j}\, . \end{aligned}$$
(3.26)
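Identities (3.25)–(3.26) are easy to test numerically. The sketch below (with a naive recursive Pfaffian routine of our own, adequate only for small matrices) draws random positive \(X_i\) and compares the Pfaffian of the Schur matrix with the explicit product and with \(\sqrt{|\det |}\).

```python
import numpy as np

def pfaffian(M):
    # naive Laplace-type expansion along the first row; fine for small antisymmetric matrices
    n = M.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    total = 0.0
    for j in range(1, n):
        idx = [k for k in range(1, n) if k != j]
        total += (-1)**(j - 1) * M[0, j] * pfaffian(M[np.ix_(idx, idx)])
    return total

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 3.0, size=6)                  # plays the role of X_1, ..., X_{2 n_s}
M = (X[:, None] - X[None, :]) / (X[:, None] + X[None, :])
prod = np.prod([(X[i] - X[j]) / (X[i] + X[j]) for i in range(6) for j in range(i + 1, 6)])
print(pfaffian(M), prod, np.sqrt(abs(np.linalg.det(M))))   # agree up to sign/rounding, cf. (3.25)-(3.26)
```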

Hence the starting formula for the moments now becomes:

$$\begin{aligned}&{\mathbb {E}} \left[ W^n Z (t)^n\right] \nonumber \\&\quad = \sum _{n_s=1}^n \frac{ n! }{n_s! } \prod _{p=1}^{n_s} \sum _{m_p \geqslant 1} \int _{\mathbb {R}}\frac{{\mathrm {d}}k_p}{2 \pi } \frac{B_{k_p,m_p}}{4 \mathbf{i }k_p} e^{ (m_p^3-m_p) \frac{ t}{12} - m_p k_p^2 t } \delta _{n,\sum _{j=1}^{n_s} m_j} ~ \underset{2 n_s \times 2 n_s}{\mathrm{Pf}}\left[ \frac{X_i-X_j}{X_i+X_j} \right] \end{aligned}$$
(3.27)

4 Moments from the Log-Gamma Polymer

In this section we compute again the moments of the solution of the SHE, Z(x, t), taking a limit of a known formula for the moments of the partition function of the log-gamma polymer on the half-quadrant square lattice. This method uses the convergence of the log-gamma polymer to the KPZ equation at high temperature and a combinatorial conjecture of Borodin-Bufetov-Corwin [55, Conjecture 5.2]. We also discuss in Sects. 4.5 and 4.6 useful identities in distribution coming from symmetries in so-called half-space Macdonald processes [51].

4.1 Moment Formula for the Log-Gamma Polymer

Definition 4.1

(Half-space log-gamma polymer) Let \(\alpha _{\circ }, \alpha _1, \alpha _2, \dots \) be real parameters such that \(\alpha _i+\alpha _{\circ }>0\) for all \(i \geqslant 1\) and \(\alpha _i+\alpha _j>0\) for all \(i\ne j\geqslant 1\). The half-space log-gamma polymer is a probability measure on up-right paths confined in the half-quadrant \( \lbrace (i,j)\in {\mathbb {Z}}_{>0}^2 : i\geqslant j \rbrace \) (see Fig. 4), where the probability of an admissible path \(\pi \) between (1, 1) and (n, m) is given by

$$\begin{aligned} \frac{1}{{\mathcal {Z}}(n,m)} \ \ \prod _{(i,j)\in \pi } w_{i,j}, \end{aligned}$$

and where \(\big (w_{i,j}\big )_{i\geqslant j}\) is a family of independent random variables such that for \(i>j, w_{i,j}\sim \mathrm {Gamma}^{-1}(\alpha _i + \alpha _j)\) and \(w_{i,i}\sim \mathrm {Gamma}^{-1}(\alpha _{\circ }+ \alpha _i)\). The notation \(\mathrm {Gamma}^{-1}(\theta )\) denotes the inverse of a Gamma distributed random variable with shape parameter \(\theta \). The partition function \({\mathcal {Z}}(n,m)\) is given by

$$\begin{aligned} {\mathcal {Z}}(n,m) = \sum _{\pi : (1,1) \rightarrow (n,m)} \prod _{(i,j)\in \pi } w_{i,j}. \end{aligned}$$
Fig. 4: An admissible path in the half-space log-gamma polymer model, that is, a path proceeding by unit steps rightward and upward in the half quadrant \( \lbrace (i,j)\in {\mathbb {Z}}_{>0}^2 : i\geqslant j \rbrace \).
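
As an illustration of Definition 4.1 (not taken from the paper), one disorder sample of \({\mathcal {Z}}(n,m)\) can be generated by dynamic programming, using the recurrence \({\mathcal {Z}}(i,j)=w_{i,j}({\mathcal {Z}}(i-1,j)+{\mathcal {Z}}(i,j-1))\) over the half quadrant; the parameter values below are arbitrary.

```python
# Sampling the half-space log-gamma partition function of Definition 4.1 by
# dynamic programming over the half quadrant {i >= j}. Parameters are illustrative.
import numpy as np

def sample_partition_function(N, alpha, alpha_circ, rng):
    """Return Z[i, j] for 1 <= j <= i <= N for one realization of the disorder."""
    Z = np.zeros((N + 1, N + 1))
    for i in range(1, N + 1):
        for j in range(1, i + 1):
            shape = alpha_circ + alpha[i] if i == j else alpha[i] + alpha[j]
            w = 1.0 / rng.gamma(shape)                    # w_{i,j} ~ Gamma^{-1}(shape)
            Z[i, j] = w * (Z[i - 1, j] + Z[i, j - 1]) if (i, j) != (1, 1) else w
    return Z

rng = np.random.default_rng(2)
N = 20
alpha = np.full(N + 1, 1.5)                               # alpha_1 = ... = alpha_N = 1.5
Z = sample_partition_function(N, alpha, alpha_circ=0.7, rng=rng)
print("log Z(20, 20) for this sample:", np.log(Z[N, N]))
```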

The moments of the partition function were computed using half-space Macdonald processes in [51].

Proposition 4.2

([51, Corollary 6.36]) For \(n\geqslant m\) and \(k\in {\mathbb {Z}}_{>0}\) such that \(k< \min _i\lbrace 2\alpha _i, \alpha _i+\alpha _{\circ }\rbrace \),

$$\begin{aligned} {\mathbb {E}}[{\mathcal {Z}}(n,m)^k]= & {} \oint \frac{{\mathrm {d}}z_1}{2\mathbf{i }\pi }\cdots \oint \frac{{\mathrm {d}}z_k}{2\mathbf{i }\pi } \prod _{1\leqslant a<b\leqslant k} \frac{z_a-z_b}{z_a-z_b-1}\, \frac{z_a+z_b}{1+ z_a+z_b}\nonumber \\&\times \prod _{i=1}^{k} \frac{2z_i}{z_i-\alpha _{\circ }+1/2}\prod _{j=1}^n \left( \frac{1}{\alpha _j-z_i-1/2} \right) \prod _{j=1}^{m} \left( \frac{1}{z_i + \alpha _j-1/2}\right) , \end{aligned}$$
(4.1)

where the contours are such that for all \(1\leqslant c\leqslant k\), the contour for \(z_c\) encloses \(\lbrace - \alpha _j+1/2\rbrace _{1\leqslant j\leqslant m}\) and \(\lbrace z_{c+1}+1, \dots , z_k+1\rbrace \), and excludes the poles of the integrand at \( \alpha _{\circ }- 1/2\) and \( \alpha _j-1/2\) (for \(1\leqslant j\leqslant n\)). Because the integrand decays at least quadratically at infinity, one may choose the contours to be all vertical lines such that the contour for the variable \(z_i\) is \(r_i+\mathbf{i }{\mathbb {R}}\) where

$$\begin{aligned} \max _j\lbrace k - \alpha _j-1/2 \rbrace< r_k+k-1< \dots< r_3+2< r_2+1< r_1 < \min _j \lbrace \alpha _{\circ }-1/2, \alpha _j-1/2 \rbrace . \end{aligned}$$

Note that if \(k>\alpha _i+\alpha _j\) for some \(i<j\), or \(k>\alpha _i+\alpha _{\circ }\) for some i, the k-th moment of \({\mathcal {Z}}(n,m)\) fails to exist.

Remark 4.3

One may also compute mixed moments of the partition functions at several points along a down-right path.

4.2 Stationary Structure for the Log-Gamma Polymer

In this paragraph, we will need to assume \(\alpha _{\circ }+\alpha _1=0\). Since the weight \(w_{1,1}\sim \mathrm {Gamma}^{-1}(\alpha _{\circ }+\alpha _1)\) is then no longer well defined, we consider a modified partition function where this weight has been removed, i.e. we define

$$\begin{aligned} {\mathcal {Z}}^{stat}(n,m) = \frac{{\mathcal {Z}}(n,m)}{w_{1,1}}. \end{aligned}$$
(4.2)

Following [61], we define horizontal and vertical increments of the partition function as

$$\begin{aligned} U_{n,m} = \frac{{\mathcal {Z}}^{stat}(n,m)}{{\mathcal {Z}}^{stat}(n-1,m)}, \;\;V_{n,m} = \frac{{\mathcal {Z}}^{stat}(n,m)}{{\mathcal {Z}}^{stat}(n,m-1)}. \end{aligned}$$
(4.3)

The partition function satisfies the recurrence

$$\begin{aligned} {\mathcal {Z}}^{stat}(n,m) = w_{n,m}({\mathcal {Z}}^{stat}(n-1,m)+{\mathcal {Z}}^{stat}(n,m-1)), \end{aligned}$$
(4.4)

where by convention we have assumed that \({\mathcal {Z}}^{stat}(n,m)=0\) if (n, m) does not belong to the half quadrant \( \lbrace (i,j)\in {\mathbb {Z}}_{>0}^2 : i\geqslant j \rbrace \). From there, one may deduce a recurrence for the increments

$$\begin{aligned} U_{n,m} = w_{n,m}\left( 1 +\frac{U_{n,m-1}}{V_{n-1,m}} \right) ,\;\;\;V_{n,m} = w_{n,m}\left( 1 +\frac{V_{n-1,m}}{U_{n,m-1}} \right) . \end{aligned}$$
(4.5)

We will need the following lemma from [61] where the stationary structure for the full-space log-gamma polymer was introduced.

Lemma 4.4

([61, Lemma 3.2]) Let U, V, w be independent random variables. Let

$$\begin{aligned} U'=w\left( 1+\frac{U}{V} \right) , \;\; V'=w\left( 1+\frac{V}{U} \right) , \;\; w'=\left( \frac{1}{U} +\frac{1}{V}\right) ^{-1}. \end{aligned}$$
(4.6)

If for some \(\alpha >0\) and \(\theta \in (-\alpha , \alpha )\), \(U\sim \mathrm {Gamma^{-1}}(\alpha +\theta )\), \(V\sim \mathrm {Gamma^{-1}}(\alpha -\theta )\), \(w\sim \mathrm {Gamma^{-1}}(2\alpha )\), then the triples (U, V, w) and \((U', V', w')\) have the same distribution.
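
A quick Monte Carlo illustration of Lemma 4.4 (ours, with arbitrary parameter values): we compare \({\mathbb {E}}[\log (\cdot )]\) of each coordinate of \((U',V',w')\) with \(-\psi \) of the corresponding shape parameter, as it should be if the triple keeps the same law.

```python
# Monte Carlo illustration of Lemma 4.4: (U', V', w') defined by (4.6) should have the
# same law as (U, V, w). Here we only compare E[log(.)] with -digamma(shape).
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(3)
alpha, theta, N = 2.0, 0.5, 10**6
U = 1.0 / rng.gamma(alpha + theta, size=N)    # U ~ Gamma^{-1}(alpha + theta)
V = 1.0 / rng.gamma(alpha - theta, size=N)    # V ~ Gamma^{-1}(alpha - theta)
w = 1.0 / rng.gamma(2.0 * alpha, size=N)      # w ~ Gamma^{-1}(2 alpha)

Up = w * (1.0 + U / V)
Vp = w * (1.0 + V / U)
wp = 1.0 / (1.0 / U + 1.0 / V)

for name, sample, shape in [("U'", Up, alpha + theta), ("V'", Vp, alpha - theta), ("w'", wp, 2 * alpha)]:
    print(name, np.mean(np.log(sample)), " expected ", -digamma(shape))
```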

Coming back to our model \({\mathcal {Z}}^{stat}(n,m)\), when \(\alpha _{\circ }+\alpha _1=0\) the model is stationary in the following sense.

Proposition 4.5

Let \(k\in {\mathbb {Z}}_{\geqslant 1}\). Assume that \(\alpha _2=\alpha _3=\dots = \alpha >0\) and \(\alpha _{\circ }+\alpha _1=0\). Consider a down-right path in the lattice going through the points \(\lbrace (n_i, m_i)\rbrace _{1\leqslant i\leqslant k}\), such that \((n_{i+1},m_{i+1})-(n_i,m_i)\) equals either \((0,-1)\) or (1, 0) (see Fig. 5). We associate to this down-right path increments \(\left\{ I_j \right\} _{1\leqslant j\leqslant k-1}\) where

$$\begin{aligned} I_j= {\left\{ \begin{array}{ll}U_{n_{j+1}, m_{j+1}} \text { when }n_{j+1}>n_{j},\\ V_{n_j, m_j} \text { when } m_j>m_{j+1}.\end{array}\right. } \end{aligned}$$

Then the increments \(\left\{ I_j \right\} _{1\leqslant j\leqslant k-1}\) are all independent and distributed as \(I_j\sim \mathrm {Gamma^{-1}}(\alpha _1+\alpha )\) when \(I_j\) is a horizontal U increment, and \(I_j\sim \mathrm {Gamma^{-1}}(\alpha _{\circ }+\alpha )\) when \(I_j\) is a vertical V increment. In particular, for any m, the increments \(\left\{ U_{n,m} \right\} _{n\geqslant m+1}\) are independent and distributed as \(U_{n,m}\sim \mathrm {Gamma^{-1}}(\alpha _1+\alpha )\).

Proof

The distribution of increments along the first row is completely constrained by the definition of the model. Indeed, we have that \({\mathcal {Z}}^{\mathrm{stat}}(n,1) = \prod _{i=2}^n w_{i,1}\), so that the increments along the first row are given by \(U_{n,1}=w_{n,1}\) and the definition of the model implies that weights \(w_{n,1}\sim \mathrm {Gamma^{-1}}(\alpha _1+\alpha )\) are independent. Hence, for \(m=1\), the increments \(\left\{ U_{n,m}\right\} _{n\geqslant 2}\) are independent and distributed as \(\mathrm {Gamma^{-1}}(\alpha _1+\alpha )\) as claimed.

In other words, we have seen that the statement of the Proposition holds for the infinite path going through the points (n, 1) for all \(n\geqslant 1\). We will show that the property is preserved under two types of local transformations of paths, depicted in Fig. 5, which consist in:

  1. (boundary update) Lifting the starting point of the path along the boundary one unit upwards;

  2. (bulk update) Transforming a down-right step into a right-down step.

Fig. 5: The two types of elementary local transformations of down-right paths considered in the proof of Proposition 4.5. The thick black path represents an arbitrary down-right path; the portions in red represent the local modifications of the path that we consider.

It is clear that any infinite down-right path can be obtained by iteration of these local transformations, starting from the path going through the points (n, 1) for all \(n\geqslant 1\). Furthermore, if the statement of the Proposition is true for any infinite down-right path, it is true as well for any subpath of the form \(\lbrace (n_i, m_i)\rbrace _{1\leqslant i\leqslant k}\) as in the statement of the Proposition. Hence we only need to show that the distribution of increments is preserved under the two local moves.

The distribution of increments on the boundary is constrained by the definition of the model. We have that \({\mathcal {Z}}^{\mathrm{stat}}(n,n) = w_{n,n}{\mathcal {Z}}^{\mathrm{stat}}(n,n-1)\), so that \(V_{n,n}= w_{n,n}\) and we recall that \(w_{n,n}\sim \mathrm {Gamma^{-1}}(\alpha _{\circ }+\alpha _n)\) is independent from all other weights. Hence, for any \(n=m\), \(V_{n,m}\) is distributed as \(\mathrm {Gamma^{-1}}(\alpha _{\circ }+\alpha _n)\) and is independent from the increments \(\lbrace U_{n',m'}, V_{n',m'}\rbrace \) for \(m'<m\) (since those increments are independent from \(w_{n,n}\)). Hence, after a boundary update, the distribution of increments is preserved.

In order to show that the property is preserved under bulk update, we use Lemma 4.4. After a bulk update, the increments are updated according to (4.5), where \(w_{n,m}\) is independent from the increments on the earlier path and distributed as \(\mathrm {Gamma^{-1}}(2\alpha )\) (recall that we have assumed that \(\alpha _2=\alpha _3=\dots = \alpha \)). If \(\alpha _{\circ }+\alpha _1=0\), we may set \(\theta = \alpha _1=-\alpha _{\circ }\) and Lemma 4.4 implies that increments along the new path will be distributed as \(U_{n,m}\sim \mathrm {Gamma^{-1}}(\alpha +\alpha _1)\), \(V_{n,m}\sim \mathrm {Gamma^{-1}}(\alpha +\alpha _{\circ })\). This shows that the distribution of increments is preserved under bulk update. Because before the bulk update, the variables (U, V, w) are independent from the rest of the increments \(I_j\) by induction, and the new random variables \((U', V')\) are just measurable functions of (U, V, w), the new variables are also independent of the other increments \(I_j\). This concludes the proof. \(\square \)

One consequence of the stationary structure is that we may compute the expectation of \(\log {\mathcal {Z}}^{\mathrm{stat}}(n,m)\). We assume that the parameters \(\alpha _i\) are chosen as in Proposition 4.5. Observe that \(\log {\mathcal {Z}}^{\mathrm{stat}}(n,m)\) is equal to the sum of the logarithms of increments of the partition function along any path from (1, 1) to (n, m). These increments are not independent, so that their sum is a highly non-trivial random variable, but we know the expectation of each increment. Since the vertical increments are distributed as \(\mathrm {Gamma^{-1}}(\alpha _{\circ }+\alpha )\) and the horizontal increments are distributed as \(\mathrm {Gamma^{-1}}(\alpha _1+\alpha )\), we have that (for \(\alpha _{\circ }+\alpha _1=0\))

$$\begin{aligned} {\mathbb {E}}\left[ \log {\mathcal {Z}}^{\mathrm{stat}}(n,m) \right] = - (n-1) \psi (\alpha _1+\alpha ) -(m-1) \psi (\alpha _{\circ }+\alpha ), \end{aligned}$$
(4.7)

where we have used that \({\mathbb {E}}[\log (\mathrm {Gamma^{-1}}(\theta ))] =-\psi (\theta ) \) and \(\psi \) is the digamma function.
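
The stationary formula (4.7) can be checked by direct simulation; the sketch below (ours, with illustrative parameter values) samples \({\mathcal {Z}}^{\mathrm{stat}}\) with \(\alpha _{\circ }+\alpha _1=0\) and compares the empirical mean of \(\log {\mathcal {Z}}^{\mathrm{stat}}(n,m)\) with the right-hand side of (4.7).

```python
# Monte Carlo check of Eq. (4.7) for the stationary half-space log-gamma polymer
# (alpha_circ + alpha_1 = 0, weight w_{1,1} removed). Parameters are illustrative.
import numpy as np
from scipy.special import digamma

def log_Z_stat(n, m, alpha, alpha_1, alpha_circ, rng):
    Z = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            if (i, j) == (1, 1):
                Z[1, 1] = 1.0                               # w_{1,1} removed
                continue
            a_i = alpha_1 if i == 1 else alpha
            a_j = alpha_1 if j == 1 else alpha
            shape = alpha_circ + a_i if i == j else a_i + a_j
            Z[i, j] = (Z[i - 1, j] + Z[i, j - 1]) / rng.gamma(shape)
    return np.log(Z[n, m])

rng = np.random.default_rng(4)
alpha, alpha_1 = 1.5, 0.4
alpha_circ = -alpha_1                                       # stationarity: alpha_circ + alpha_1 = 0
n, m, samples = 12, 8, 2000
mc = np.mean([log_Z_stat(n, m, alpha, alpha_1, alpha_circ, rng) for _ in range(samples)])
exact = -(n - 1) * digamma(alpha_1 + alpha) - (m - 1) * digamma(alpha_circ + alpha)
print("Monte Carlo:", mc, "   Eq. (4.7):", exact)   # agree up to Monte Carlo error
```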

4.3 Convergence to the Half-Space KPZ Equation

At high temperature (when the parameters of inverse gamma random variables go to infinity and space-time coordinates are rescaled appropriately), the partition function \({\mathcal {Z}}(n,m)\) converges to the multiplicative noise stochastic heat equation on \({\mathbb {R}}_{\geqslant 0}\) with Robin type boundary condition [66].

Although the convergence of discrete directed polymers to the half-space KPZ equation was proved rigorously in [66] (based on the analogous full-space result in [99]), we will rederive this convergence heuristically in order to adapt it to our units and initial condition (which is not covered in [66]). Let us change to more natural time and space coordinates \(\uptau = n+m-2\) and \(\varkappa = n-m\). The partition function \(Z_d(\varkappa , \uptau ) := {\mathcal {Z}}(n,m)\) satisfies the discrete version of the stochastic heat equation

$$\begin{aligned} Z_d(\varkappa ,\uptau ) = w_{\varkappa ,\uptau } (Z_d(\varkappa -1,\uptau -1)+Z_d(\varkappa +1,\uptau -1)), \varkappa >0 \end{aligned}$$
(4.8)

where \(w_{\varkappa ,\uptau }\sim \mathrm {Gamma}^{-1}(\gamma _{\varkappa , \uptau })\) with parameter \(\gamma _{\varkappa ,\uptau }=\alpha _n+\alpha _m\) (independent for each \(\varkappa ,\uptau \)). The boundary condition at \(\varkappa =0\) is given by

$$\begin{aligned} Z_d(0,\uptau ) = w_{0,\uptau } Z_d(1, \uptau -1). \end{aligned}$$
(4.9)

Let us renormalize \(Z_d\) and define \(Z_r(\varkappa , \uptau ) = C^{-\uptau } Z_d(\varkappa , \uptau )\). The correct factor to use is such that \(C^{\uptau }\) behaves asymptotically as the point to line partition function where weights would be replaced by their average. Hence we set \(C=2{\mathbb {E}} [w_{\varkappa , \uptau }]\). Note that in the following, we will choose parameters so that \({\mathbb {E}} [w_{\varkappa , \uptau }]\) does not depend on \(\uptau , \varkappa \) (except along the lines \(\uptau =\varkappa \) or \(\varkappa =0\)). We may rewrite (4.8) as

$$\begin{aligned} \nabla _{\uptau } Z_r(\varkappa , \uptau ) = \tfrac{1+b_{\varkappa , \uptau }}{2} \Delta _\varkappa Z_r(\varkappa , \uptau -1) +b_{\varkappa , \uptau } Z_r(\varkappa , \uptau -1), \end{aligned}$$
(4.10)

where \(b_{\varkappa , \uptau }= \frac{2 w_{\varkappa ,\uptau }}{C}-1 \), \(\nabla _{\uptau }\) is the discrete time derivative and \( \Delta _\varkappa \) is the discrete Laplacian.

Let us fix \(\alpha _{\circ }\in {\mathbb {R}}\), \(\alpha _1\in {\mathbb {R}}\) and set \(\alpha _i = 1/2 + \sqrt{n}/2\) for all \(i\geqslant 2\) and use the scalings

$$\begin{aligned} \uptau =n t/2, \quad \varkappa =\sqrt{n}x/2. \end{aligned}$$
(4.11)

In this case, we may choose \(C = 2/\sqrt{n}\) and the family of random variables \(w_{\varkappa ,\uptau }\) rescales to a white noise in the sense that \(n\, b_{\varkappa , \uptau } \Rightarrow \sqrt{2} \xi (x,t)\).

At \(\uptau = \varkappa \), we have that for large \(\varkappa \),

$$\begin{aligned} Z_r(\varkappa , \varkappa ) = \mathrm {Gamma}^{-1}(\alpha _{\circ }+\alpha _1) \times e^{{\mathcal {B}}(x) -\alpha _1 x } + o\left( \frac{1}{n}\right) , \end{aligned}$$
(4.12)

where the inverse Gamma random variable (coming from \(w_{1,1}\)) and the Brownian motion \({\mathcal {B}}(x)\) are independent.

It is then natural to define the continuous limit

$$\begin{aligned} Z_\infty (x,t) = \lim _{n\rightarrow \infty } Z_r(\varkappa , \uptau ), \end{aligned}$$
(4.13)

so that \(Z_{\infty }\) has the initial data \( Z_\infty (x,0) = \mathrm {Gamma}^{-1}(\alpha _{\circ }+\alpha _1) \times e^{{\mathcal {B}}(x) -\alpha _1 x }. \) Under the scalings that we consider, the boundary condition (4.9) becomes

$$\begin{aligned} Z_{\infty }(0,t) \approx \frac{w}{C}\, Z_{\infty }\left( \frac{2}{\sqrt{n}},t\right) . \end{aligned}$$
(4.14)

Let us take the average on both sides of (4.14). We use that \({\mathbb {E}}[\frac{w}{C}]= \frac{1}{1+\frac{2\alpha _{\circ }-1}{\sqrt{n}}},\) and consider that the weight w is independent from \(Z_{\infty }(\frac{2}{\sqrt{n}},t)\), as this is true in (4.9). We obtain

$$\begin{aligned} {\mathbb {E}}\left[ Z_{\infty }(0,t) \right] = \left( 1-\frac{2}{\sqrt{n} } (\alpha _{\circ }-1/2) \right) {\mathbb {E}} \left[ Z_{\infty }\left( \frac{2}{\sqrt{n}},t\right) \right] + o(1/\sqrt{n}), \end{aligned}$$
(4.15)

which, by Taylor approximation, leads to

$$\begin{aligned} \partial _x {\mathbb {E}}\left[ Z_\infty (x,t)\right] \Big \vert _{x=0} = (\alpha _{\circ }-1/2){\mathbb {E}}\left[ Z_{\infty }(0,t)\right] . \end{aligned}$$
(4.16)

Note that one may also obtain the more general boundary condition (2.7) for mixed moments by multiplying both sides of (4.14) by \(Z_{\infty }(x_2,t)\dots Z_{\infty }(x_n,t)\) before taking the average. Finally, multiplying (4.10) by n we obtain, when taking formally the \(n\rightarrow \infty \) limit, that \(Z_{\infty }(x,t)\) should satisfy the SHE (2.2). Thus, we have arrived at the following.

Claim 4.6

(Combining [66] and [65].) Let \({\mathcal {Z}}(n,m)\) be the partition function of the log-gamma polymer (see Definition 4.1) where \(\alpha _2=\alpha _3= \dots = \frac{\sqrt{n}}{2}+\frac{1}{2}\), \(\alpha _{\circ }= A+\frac{1}{2}\) and \(\alpha _1 = B+\frac{1}{2}\). Let Z(x, t) be the solution of the multiplicative noise stochastic heat equation from Definition 2.1 with boundary parameter A and initial drift \(-1/2-B\). Fix \(t>0,x\geqslant 0\). Then the family of random variables

$$\begin{aligned} \left\{ \frac{ {\mathcal {Z}} \left( \frac{nt +\sqrt{n}x}{4} , \frac{nt -\sqrt{n}x}{4} \right) }{\left( \frac{2}{\sqrt{n}}\right) ^{nt/2-2}} \right\} _{t>0,x \geqslant 0} \end{aligned}$$
(4.17)

converges in distribution to \(\mathrm {Gamma}^{-1}(A+B+1)\times Z(x,t)\) (in the space of continuous space-time trajectories), where the inverse Gamma random variable is independent from the process Z(x, t). Moreover, the partition function \({\mathcal {Z}}^{\mathrm{stat}}(n,m)\) from Sect. 4.2 converges, under the exact same scalings, to Z(x, t) (not multiplied by \(\mathrm {Gamma}^{-1}(A+B+1)\)).

Note that the derivation that we presented above is only heuristic. We will not provide a complete proof of this result, though we indicate where the needed arguments can be found. The convergence of the polymer partition function for the half-space log-gamma polymer to the multiplicative noise stochastic heat equation is proved in [66] using a chaos series representation of the polymer partition function, see in particular Section 5 therein. However, the setting of [66] restricts to delta initial data \( B=+\infty \) (and Robin type boundary with arbitrary parameter A). The convergence for Brownian initial data with arbitrary parameter B was proven in [65, Theorem 2.2], though [65] works only in the case of Dirichlet boundary condition, that is in the case \(A=+\infty \). Hence, one needs to combine the arguments from [66] and [65] to deduce this result.

Using the stationary structure from Sect. 4.2 together with Claim 4.6, we obtain the following. Let Z(x, t) be as in Claim 4.6 and assume that \(A+B+1=0\). Then, for any time \(t>0\), Z(x, t)/Z(0, t) is the exponential of a Brownian motion with drift \(-B-1/2\).

We may also compute the expectation of \(h(x,t) = \log Z(x,t)\) in the stationary case when \(A+B+1=0\). Using (4.7) and plugging there the scalings of Claim 4.6, we obtain that

$$\begin{aligned} {\mathbb {E}}\left[ h(x,t)\right] = -\frac{t}{12} + \left( B+\frac{1}{2}\right) ^2t-\left( B+\frac{1}{2}\right) x, \quad \quad A+B+1=0. \end{aligned}$$
(4.18)

In particular, when \(A=B=-1/2\), we have that \({\mathbb {E}}\left[ h(0,t)\right] =-t/12\).

4.4 Moments of the Half-Space KPZ Equation with Brownian Initial Data

Using the moment formula from Proposition 4.2 and the convergence result from Claim 4.6, we obtain the following moment formula for the half-space stochastic heat equation Z(x, t). Note that the formula is valid for any \(x\geqslant 0\).

Claim 4.7

Let Z(xt) be the solution to the half-space stochastic heat equation (Definition 2.1) with Brownian initial data with drift \(-1/2-B\) and boundary parameter A. Assume that \(B>0\), and \(A+B>k-1\). Then, we have

$$\begin{aligned}&{\mathbb {E}}[Z(x,t)^k] \nonumber \\&\quad = 2^k \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-k)} \int _{r_1+\mathbf{i }{\mathbb {R}}}\frac{{\mathrm {d}}z_1}{2\mathbf{i }\pi }\cdots \int _{r_k+\mathbf{i }{\mathbb {R}}}\frac{{\mathrm {d}}z_k}{2\mathbf{i }\pi } \prod _{1\leqslant a<b\leqslant k} \frac{z_a-z_b}{z_a-z_b-1}\, \frac{z_a+z_b}{z_a+z_b-1}\nonumber \\&\qquad \times \prod _{i=1}^k \frac{z_i}{z_i+A} \frac{1}{B^2-z_i^2} e^{tz_i^2 - xz_i}, \end{aligned}$$
(4.19)

where the contours are chosen so that \(B>r_1>r_2+1>\dots>r_k+k-1>k-1-A\).

Remark 4.8

One may also compute mixed moments of Z(x, t), that is \({\mathbb {E}} [Z(x_1,t)\dots Z(x_k,t)]\).

Proof

We start from the moment formula given in Proposition 4.2. Under the scalings considered in Claim 4.6, the second line of (4.1) becomes

$$\begin{aligned} \prod _{i=1}^k \frac{2z_i}{z_i-\alpha _{\circ }+1/2} \frac{1}{(\alpha _1-1/2)^2-z_i^2} \left( \frac{1}{\sqrt{n}/2 -z_i } \right) ^{\frac{nt+\sqrt{n}x}{4}-1 } \left( \frac{1}{\sqrt{n}/2 +z_i} \right) ^{\frac{nt-\sqrt{n}x}{4}-1 }. \end{aligned}$$

Using dominated convergence, one readily obtains that

$$\begin{aligned}&\lim _{n\rightarrow \infty }{\mathbb {E}} \left( \frac{ {\mathcal {Z}}\left( \frac{nt +\sqrt{n}x}{4} , \frac{nt -\sqrt{n}x}{4} \right) }{\left( \frac{2}{\sqrt{n}}\right) ^{nt/2-2}} \right) ^k = \int _{r_1+\mathbf{i }{\mathbb {R}}}\frac{{\mathrm {d}}z_1}{2\mathbf{i }\pi }\cdots \int _{r_k+\mathbf{i }{\mathbb {R}}}\frac{{\mathrm {d}}z_k}{2\mathbf{i }\pi } \nonumber \\&\quad \prod _{1\leqslant a<b\leqslant k} \frac{z_a-z_b}{z_a-z_b-1}\, \frac{z_a+z_b}{1+z_a+z_b} \prod _{i=1}^k \frac{2z_i}{z_i-A} \frac{1}{B^2-z_i^2} e^{tz_i^2 + xz_i}. \end{aligned}$$
(4.20)

Using the convergence in distribution from Claim 4.6, the left hand side in (4.20) is equal to \(m_k {\mathbb {E}}[Z(x,t)^k]\), where \(m_k\) is the k-th moment of an inverse Gamma random variable with parameter \(\alpha _{\circ }+\alpha _1\) (the convergence in distribution implies the convergence of moments modulo some tail bounds, which, using the Markov inequality, can be proven from our explicit moment formulae; details of this argument are provided in a similar case in [100, Sect. 3]). It is well known that \(m_k =\Gamma (\alpha _{\circ }+\alpha _1-k)/ \Gamma (\alpha _{\circ }+\alpha _1)\), so that

$$\begin{aligned} {\mathbb {E}}[Z(x,t)^k] = \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-k)} \times \text {R.H.S. of } (4.20) \end{aligned}$$
(4.21)

Finally, we have used the change of variables \({\tilde{z}}_i=-z_{k-i+1}\) to obtain the statement of the claim. \(\square \)

4.5 Symmetry Between Drift and Boundary Parameters

We now exploit one of the symmetries of the log-gamma polymer, arising from more general symmetries of so-called half-space Macdonald processes [51], which, in the KPZ limit, extends a result of Parekh [65].

Claim 4.9

Let us denote here by \(Z_A^{B}(x,t)\) (with explicit dependence on the parameters A, B) the solution to the multiplicative noise SHE with boundary parameter A and initial drift \(-1/2-B\) (Definition 2.1). For any fixed \(t>0\) and \(A,B\in {\mathbb {R}}\), we have the equality in distribution

$$\begin{aligned} Z_A^{B}(0,t) = Z_{B}^{A}(0,t). \end{aligned}$$
(4.22)

Proof

This is a consequence of [51, Proposition 8.1] which states that the law of the partition function \({\mathcal {Z}}(n,n)\) of the half-space log-gamma polymer is invariant under exchanging parameters \(\alpha _{\circ }\) and \(\alpha _1\) (recall Definition 4.1), along with the convergence result from Claim 4.6.

However, [51, Proposition 8.1] assumes that \(\alpha _{\circ }+\alpha _1>0\) as in Definition 4.1, which would require the condition \(A+B+1>0\). Let us explain why we do not need to assume this condition. Recall Definition 4.1 and let us denote the partition function by \({\mathcal {Z}}_{\alpha _{\circ }}^{\alpha _1}(n,n)\), where we indicate explicitly the dependence on parameters \(\alpha _{\circ }, \alpha _1\). The result from [51, Proposition 8.1] implies that for \({{\tilde{\alpha _{\circ }}}},{{\tilde{\alpha }}}_1\) such that \({{\tilde{\alpha _{\circ }}}}+{{\tilde{\alpha }}}_1>0\) we have the equality in distribution

$$\begin{aligned} {\mathcal {Z}}_{{{\tilde{\alpha _{\circ }}}}}^{{{\tilde{\alpha }}}_1}(n,n) = {\mathcal {Z}}_{{{\tilde{\alpha }}}_1}^{{{\tilde{\alpha _{\circ }}}}}(n,n). \end{aligned}$$

Notice that by Definition 4.1, we have

$$\begin{aligned} {\mathcal {Z}}_{{{\tilde{\alpha _{\circ }}}}}^{{{\tilde{\alpha }}}_1}(n,n) = w_{1,1}\, {\mathcal {Z}}_{{{\tilde{\alpha _{\circ }}}}}^{{{\tilde{\alpha }}}_1, \mathrm {stat}}(n,n), \;\;\;{\mathcal {Z}}_{{{\tilde{\alpha }}}_1}^{{{\tilde{\alpha _{\circ }}}}}(n,n) = w_{1,1}\, {\mathcal {Z}}_{{{\tilde{\alpha }}}_1}^{{{\tilde{\alpha _{\circ }}}} , \mathrm {stat}}(n,n), \end{aligned}$$

where \({\mathcal {Z}}_{\alpha _{\circ }}^{\alpha _1, \mathrm {stat}}\) is defined as in Sect. 4.2 and \(w_{1,1}\) has the same distribution in both cases. This implies that we have also the equality in distribution

$$\begin{aligned} {\mathcal {Z}}_{{{\tilde{\alpha _{\circ }}}}}^{{{\tilde{\alpha }}}_1 , \mathrm {stat}}(n,n) = {\mathcal {Z}}_{{{\tilde{\alpha }}}_1}^{{{\tilde{\alpha _{\circ }}}} , \mathrm {stat}}(n,n). \end{aligned}$$
(4.23)

The distribution of the random variable in (4.23) depends on parameters \({{\tilde{\alpha _{\circ }}}}, {{\tilde{\alpha }}}_1, \alpha _2, \dots , \alpha _n\). The equality in distribution can be analytically extended to all parameters such that \(\alpha _i+\alpha _j>0\) for any \(2\leqslant i<j\leqslant n\), \({{\tilde{\alpha }}}_1+\alpha _i>0\) for any \(2\leqslant i\leqslant n\), and \({{\tilde{\alpha _{\circ }}}}+\alpha _i>0\) for any \(2\leqslant i\leqslant n\). In particular, we do not require anymore that \({{\tilde{\alpha _{\circ }}}}+{{\tilde{\alpha }}}_1>0\). Passing to the limit in (4.23) using Claim 4.6, we obtain the desired result. \(\square \)

4.6 Another Conjectural Identity in Law

The symmetry between parameters A and B stated in Claim 4.9 relies on a similar property for the log-gamma polymer (symmetry between \(\alpha _{\circ }\) and \(\alpha _1\)) based on the theory of half-space Macdonald processes and stated as [51, Proposition 8.1]. This result was stated in [51] for the log-gamma polymer model where \(\alpha _2 = \dots = \alpha _n\). However, the same property actually holds for general half-space Macdonald processes and for any choice of parameters, see [51, Proposition 2.6], as long as we restrict to the partition function on the boundary. Furthermore, the law of \({\mathcal {Z}}(n,n)\) is symmetric with respect to permutation of the parameters \(\alpha _i\). Thus, we claim that one can also exchange the roles of the parameter \(\alpha _{\circ }\) and the parameter \(\alpha _2\).

When scaling parameters to the (multiplicative noise) stochastic heat equation in Claim 4.6, we set \(\alpha _{\circ }=A+\frac{1}{2}\), \(\alpha _1=B+\frac{1}{2}\) and \(\alpha _i=\frac{\sqrt{n}}{2}+\frac{1}{2}\) for \(i\geqslant 2\). If we exchange parameters \(\alpha _{\circ }\) and \(\alpha _2\), we expect that we will obtain the stochastic heat equation with boundary parameter equal to \(+\infty \) (i.e. with Dirichlet boundary condition \(Z(0,t)=0\)) and initial condition given by

$$\begin{aligned} Z(x,0) = \int _{0}^x \exp \left( {\mathcal {B}}_1(y) + {\mathcal {B}}_2(x)-{\mathcal {B}}_2(y) \right) {{\mathrm {d}}} y , \end{aligned}$$
(4.24)

where \({\mathcal {B}}_1\) and \({\mathcal {B}}_2\) are independent Brownian motions with respective drifts \(-(B+1/2)\) and \(-(A+1/2)\). Let us call \(Z_{\mathrm{Dir}}^{A,B}\) the solution to the stochastic heat equation with Dirichlet boundary condition and the initial condition given above in (4.24). We refer to [64] regarding the exact meaning of the Dirichlet boundary condition in this context. Following arguments similar to those in the proof of [64, Theorem 1.1], we conjecture that for any \(t>0\), we have the identity in distribution

$$\begin{aligned} Z_A^B(0,t) = \lim _{x\rightarrow 0} \frac{Z_{\mathrm{Dir}}^{A,B}(x,t)}{x}. \end{aligned}$$
(4.25)

The identity in law (4.25) allows us to predict the law of large numbers for \(h(0,t)= \log Z_A^B(0,t)\) in the bound phase. Recall that we expect h(0, t) to follow the asymptotics

$$\begin{aligned} h(0,t) \simeq v^{A,B}_{\infty } t + t^{\beta } \chi \end{aligned}$$
(4.26)

where \(\chi \) is an \({\mathcal {O}}(1)\) random variable, and \(\beta \) the growth fluctuation exponent. Using (4.25), \(\log \frac{Z_{\mathrm{Dir}}^{A,B}(x,t)}{x}\) should also follow the same asymptotics. When \(A<-1/2\) or \(B<-1/2\), we see that \(Z_{\mathrm{Dir}}^{A,B}(x,0)\) grows as \(e^{\max \lbrace |A+\frac{1}{2}| , |B+\frac{1}{2} | \rbrace x}\). Hence, the polymer partition function will be dominated by paths from (x, 0) to (0, t) where x is of order t. More precisely, since the point-to-point free energy from (x, 0) to (0, t) behaves asymptotically as \(-\frac{t}{12}- \frac{x^2}{4t}\), the partition function will be dominated by paths leaving from \(x=x^*_t\), where \(x^*_t = {\mathrm{argmax}}_{x>0}( - \frac{x^2}{4t} + \max \lbrace \vert A+\frac{1}{2}\vert , \vert B+\frac{1}{2}\vert \rbrace x) =2 \max \lbrace \vert A+\frac{1}{2} \vert , \vert B+\frac{1}{2} \vert \rbrace t\). Thus, we have that for general A, B,

$$\begin{aligned} v^{A,B}_{\infty } = - \frac{1}{12} + \Big (\min \big \lbrace A+\frac{1}{2} , B+\frac{1}{2}, 0\big \rbrace \Big )^2. \end{aligned}$$

We can even predict the fluctuation exponent \(\beta \) and the nature of fluctuations. When \(A<-1/2\) or \(B<-1/2\) with \(A\ne B\) (say \(A<B\) for simplicity), the initial condition for \(Z_{\mathrm{Dir}}^{A,B}(x,t)\) in (4.24) will essentially be the Brownian motion, i.e. \({\mathcal {B}}_2(x)\) with drift \(-(A+1/2)\). The fluctuations of the initial condition at the optimal point \(x^*_t\) are thus \(\approx {\mathcal {B}}_2(x^*_t)\) i.e. Gaussian on the scale \(t^{1/2}\), and they will dominate the fluctuations in the partition function, hence we find that \(\beta =1/2\) and \(\chi \) is Gaussian. The situation is completely similar when \(B<A\).

When \(A=B<-1/2\), the situation is a bit more delicate since we cannot approximate the initial condition (4.24) by a Brownian motion. Instead, we notice that (4.24) can be interpreted as the partition function in the O’Connell-Yor directed polymer model [101]. For large values of x, it behaves as

$$\begin{aligned} \log Z_{\mathrm{Dir}}^{A,B}(x,0)&= \log \int _{0}^x \exp \left( {\mathcal {B}}_1(y) + {\mathcal {B}}_2(x)-{\mathcal {B}}_2(y) \right) {{\mathrm {d}}} y \end{aligned}$$
(4.27)
$$\begin{aligned}&\approx \left| B+\frac{1}{2}\right| x + \sqrt{x} \max _{0\leqslant y\leqslant 1} \left\{ {\mathcal {B}}_1(y) + {\mathcal {B}}_2(1)-{\mathcal {B}}_2(y) \right\} . \end{aligned}$$
(4.28)

It was proved in [102, 103] that the latter quantity behaves asymptotically as the largest eigenvalue of a \(2\times 2\) GUE matrix on the scale \(x^{1/2}\). We must now evaluate this quantity at the optimal point \(x=x^*_t = \vert 2 B+1\vert t\), hence we find that \(\beta =1/2\) and that \(\chi \) has the same distribution as the top eigenvalue of a \(2\times 2\) GUE matrix, discussed for instance in [101, Sect. 6].

4.7 Residue Expansion

In this section, we restrict to \(x=0\) and denote \(Z(0,t)=Z(t)\) as in the previous sections. The moment formula (4.19) is not convenient for asymptotic analysis because the contours are different for each variable, so that the complexity of the formula significantly increases with k. To overcome this issue, one has to deform the contours to all lie on a fixed vertical line and take into account the residues encountered during this contour deformation. This procedure was implemented in [55], but the computation of residues is very involved and the result relies on a conjectural combinatorial simplification. Applying this result [55, Conjecture 5.2] to the moment formula (4.19), we conjecture that for \(A>k-1\) and \(B>k-1\), we have

$$\begin{aligned} {\mathbb {E}}[Z(t)^k]= & {} 2^k \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-k)} \sum _{\underset{\lambda =1^{m_1}2^{m_2}\dots }{\lambda \vdash k}} \frac{(-1)^{\ell (\lambda )}}{m_1!m_2!\dots } \int _{\mathbf{i }{\mathbb {R}}} \frac{{\mathrm {d}}w_1}{2\mathbf{i }\pi } \dots \int _{\mathbf{i }{\mathbb {R}}} \frac{{\mathrm {d}}w_{\ell (\lambda )}}{2\mathbf{i }\pi }\nonumber \\&\times \prod _{j=1}^{\ell (\lambda )} \frac{(w_j+1/2)_{\lambda _j-1}}{4(w_j)_{\lambda _j}} {\mathrm{Pf}}\left[ \frac{u_i-u_j}{u_i+u_j} \right] _{i,j=1}^{2\ell (\lambda )} \nonumber \\&\times E(w_1, w_1+1, \dots , w_1+\lambda _1-1, \nonumber \\&\quad w_2, \dots , w_2+\lambda _2-1, \dots , w_{\ell (\lambda )}, \dots ,w_{\ell (\lambda )} + \lambda _{\ell (\lambda )} -1), \end{aligned}$$
(4.29)

where we use the Pochhammer notation for rising factorials, \((w)_{\lambda } = w(w+1)\dots (w+\lambda -1)\), and we define the variables \(u_i\) for \(1\leqslant i\leqslant 2\ell (\lambda )\) as

$$\begin{aligned}&(u_1, \dots , u_{2\ell (\lambda )})\nonumber \\&= (-w_1+ \tfrac{1}{2}, w_1 - \tfrac{1}{2} + \lambda _1, -w_2+ \tfrac{1}{2}, w_2 - \tfrac{1}{2} + \lambda _2,\dots , -w_{\ell (\lambda )}+ \tfrac{1}{2}, w_{\ell (\lambda )} - \tfrac{1}{2} + \lambda _{\ell (\lambda )}),\nonumber \\ \end{aligned}$$
(4.30)

and where

$$\begin{aligned} E(z_1, \dots , z_k) = \prod _{i=1}^k \frac{e^{tz_i^2} }{B^2-z_i^2} \sum _{\sigma \in BC_k} \sigma \left( \prod _{1\leqslant j<i\leqslant k} \frac{z_i-z_j-1}{z_i-z_j} \frac{z_i+z_j-1}{z_i+z_j} \prod _{i=1}^k \frac{z_i}{z_i+A} \right) . \end{aligned}$$
(4.31)

It turns out that the symmetrization can be performed using [55, Equation (54)], relying on the theory of BC-symmetric Hall-Littlewood polynomials [104], and one finds

$$\begin{aligned} E(z_1, \dots , z_k) = 2^k k! \prod _{i=1}^k \frac{e^{tz_i^2} }{B^2-z_i^2} \frac{z_i^2}{z_i^2-A^2}. \end{aligned}$$
(4.32)

Remark 4.10

It is now apparent that the moment formulae are invariant with respect to the transformation \((A,B)\mapsto (B,A)\).

Performing explicitly the evaluation into strings, we obtain

$$\begin{aligned} {\mathbb {E}}[Z(t)^k]&= 4^k k! \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-k)} \sum _{\underset{\lambda =1^{m_1}2^{m_2}\dots }{\lambda \vdash k}} \frac{(-1)^{\ell (\lambda )}}{m_1!m_2!\dots } \int _{\mathbf{i }{\mathbb {R}}} \frac{{\mathrm {d}}w_1}{2\mathbf{i }\pi } \dots \int _{\mathbf{i }{\mathbb {R}}} \frac{{\mathrm {d}}w_{\ell (\lambda )}}{2\mathbf{i }\pi }\nonumber \\&\quad \times {\mathrm{Pf}}\left[ \frac{u_i-u_j}{u_i+u_j} \right] _{i,j=1}^{2\ell (\lambda )} \prod _{j=1}^{\ell (\lambda )} \frac{e^{t \mathtt {G}(w_j+\lambda _j)}}{e^{t {\mathtt {G}}(w_j)}} \frac{(w_j+1/2)_{\lambda _j-1}}{4(w_j)_{\lambda _j}} \left( \frac{\Gamma (w_j+\lambda _j)}{\Gamma (w_j)} \right) ^2\nonumber \\&\quad \times \frac{\Gamma (B-w_j-\lambda _j+1)\Gamma (B+w_j)}{\Gamma (B-w_j+1)\Gamma (B+w_j+\lambda _j)}\frac{\Gamma (w_j-A)\Gamma (w_j+A)}{\Gamma (w_j+\lambda _j-A)\Gamma (w_j+\lambda _j+A)}, \end{aligned}$$
(4.33)

where

$$\begin{aligned} \mathtt {G}(w) = \frac{w^3}{3}-\frac{w^2}{2}+\frac{w}{6}, \end{aligned}$$

so that

$$\begin{aligned} \mathtt {G}(w+\ell )-\mathtt {G}(w) = w^2+(w+1)^2 + \dots + (w+\ell -1)^2. \end{aligned}$$
(4.34)
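
The algebraic identity (4.34) is elementary; a short symbolic check (ours) is the following.

```python
# Symbolic check of Eq. (4.34): G(w + l) - G(w) = w^2 + (w+1)^2 + ... + (w+l-1)^2
# for G(w) = w^3/3 - w^2/2 + w/6.
import sympy as sp

w = sp.symbols('w')
G = w**3 / 3 - w**2 / 2 + w / 6
for l in range(1, 6):
    lhs = sp.expand(G.subs(w, w + l) - G)
    rhs = sp.expand(sum((w + i)**2 for i in range(l)))
    assert sp.simplify(lhs - rhs) == 0
print("Eq. (4.34) verified symbolically for l = 1, ..., 5")
```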

Note that \(B^2-z^2 = (\pm B-z)(\pm B+z)\) and \(z^2-A^2 = - (\pm A-z)(\pm A+z)\), so that many choices are possible for the evaluation of \(E(z_1, \dots , z_k)\) into strings. The most convenient choice seems to be the following formula.

Claim 4.11

(based on [55, Conjecture 5.2]) For \(A>k-1\) and \(B>k-1\), we have

$$\begin{aligned} {\mathbb {E}}[Z(t)^k]= & {} 4^k k! \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-k)} \sum _{\underset{\lambda =1^{m_1}2^{m_2}\dots }{\lambda \vdash k}} \frac{(-1)^{\ell (\lambda )}}{m_1!m_2!\dots } \int _{\mathbf{i }{\mathbb {R}}} \frac{{\mathrm {d}}w_1}{2\mathbf{i }\pi } \dots \int _{\mathbf{i }{\mathbb {R}}} \frac{{\mathrm {d}}w_{\ell (\lambda )}}{2\mathbf{i }\pi }\nonumber \\&\times {\mathrm{Pf}}\left[ \frac{u_i-u_j}{u_i+u_j} \right] _{i,j=1}^{2\ell (\lambda )} \prod _{j=1}^{\ell (\lambda )} \frac{e^{t \mathtt {G}(w_j+\lambda _j)}}{e^{t \mathtt {G}(w_j)}} \frac{(w_j+1/2)_{\lambda _j-1}}{4(w_j)_{\lambda _j}} \frac{\Gamma (-w_j+1)\Gamma (w_j+\lambda _j)}{\Gamma (-w_j-\lambda _j+1)\Gamma (w_j)} \nonumber \\&\times \frac{\Gamma (B-w_j-\lambda _j+1)\Gamma (B+w_j)}{\Gamma (B-w_j+1)\Gamma (B+w_j+\lambda _j)}\frac{\Gamma (A-w_j-\lambda _j+1)\Gamma (A+w_j)}{\Gamma (A-w_j+1)\Gamma (A+w_j+\lambda _j)}. \end{aligned}$$
(4.35)

Comparing with the formula (3.27) obtained from the replica Bethe ansatz, we see that (4.35) and (3.27) agree after the substitutions

$$\begin{aligned} w_j \rightarrow \mathbf{i }k_j + \frac{1-m_j}{2}, \quad \quad \ell (\lambda ) \rightarrow n_s, \quad \quad \lambda _j \rightarrow m_j. \end{aligned}$$
(4.36)

Indeed, under this change of variables, we have

$$\begin{aligned} (u_1,u_2,\dots ,u_{2\ell (\lambda )-1}, u_{2\ell (\lambda )}) = (X_2,X_1, \dots , X_{2n_s}, X_{2n_s-1}) \end{aligned}$$
(4.37)

where \(X_j = m_p + 2 \mathbf{i }k_p\) for \(j=2 p -1\), \(X_j= m_p - 2 \mathbf{i }k_p\) for \(j=2 p\). Thus we have that

$$\begin{aligned} (-1)^{\ell (\lambda )} {\mathrm{Pf}}\left[ \frac{u_i-u_j}{u_i+u_j} \right] _{i,j=1}^{2\ell (\lambda )} = \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j} \right] . \end{aligned}$$
(4.38)
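
The sign in (4.38) simply reflects that exchanging the two entries of each of the \(\ell (\lambda )\) pairs flips the sign of the Pfaffian once per pair; a small numerical confirmation (ours, with generic complex values) is given below.

```python
# Numerical check of the sign identity (4.38): swapping the two entries of each of the
# l pairs (u_{2p-1}, u_{2p}) multiplies the Schur Pfaffian by (-1)^l (here l = 3).
import numpy as np

def pf(M):
    n = M.shape[0]
    if n == 0:
        return 1.0
    return sum((-1)**(j + 1) * M[0, j]
               * pf(np.delete(np.delete(M, [0, j], axis=0), [0, j], axis=1))
               for j in range(1, n))

rng = np.random.default_rng(5)
X = rng.uniform(1.0, 2.0, size=6) + 1j * rng.uniform(-1.0, 1.0, size=6)
u = X.reshape(3, 2)[:, ::-1].ravel()                       # swap entries within each pair

schur = lambda v: (v[:, None] - v[None, :]) / (v[:, None] + v[None, :])
print(pf(schur(X)), (-1)**3 * pf(schur(u)))                # the two values agree
```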

One may also check that \(e^{t[\mathtt {G}(w_j+\lambda _j)-\mathtt {G}(w_j)]} = e^{ (m_j^3-m_j) \frac{ t}{12} - m_j k_j^2 t }\). Using the reflection formula for the Gamma function, we may write the hyperbolic sine in the definition of \(B_{k,m}\) in (3.20) as a product of Gamma functions so that we have (dropping indices of variables \(k_j,m_j,\lambda _j, w_j\))

$$\begin{aligned} \frac{B_{k,m}}{4 \mathbf{i }k} = \frac{1}{2}\frac{\Gamma (2w-2\lambda -1)\Gamma (1-2w)}{\Gamma (2w+\lambda )\Gamma (1-2w-\lambda )}\frac{\Gamma (B-w-\lambda +1)\Gamma (B+w)}{\Gamma (B-w+1)\Gamma (B+w+\lambda )}\frac{\Gamma (A-w-\lambda +1)\Gamma (A+w)}{\Gamma (A-w+1)\Gamma (A+w+\lambda )}. \end{aligned}$$
(4.39)

Using the duplication formula for the Gamma function and the reflection formula a few times, we arrive at

$$\begin{aligned} \frac{B_{k,m}}{4 \mathbf{i }k}= & {} 2^{2\lambda } \frac{(w+1/2)_{\lambda -1}}{4(w)_{\lambda }} \frac{\Gamma (-w+1)\Gamma (w+\lambda )}{\Gamma (-w-\lambda +1)\Gamma (w)} \nonumber \\&\times \frac{\Gamma (B-w-\lambda +1)\Gamma (B+w)}{\Gamma (B-w+1)\Gamma (B+w+\lambda )}\frac{\Gamma (A-w-\lambda +1)\Gamma (A+w)}{\Gamma (A-w+1)\Gamma (A+w+\lambda )}. \end{aligned}$$
(4.40)

Thus, we have shown that (4.35) and (3.27) agree as claimed.

5 Generating Function in Terms of a Fredholm Pfaffian

We will now write the moment generating function of Z(t). We define, for \(\varsigma >0\),

$$\begin{aligned} g(\varsigma ) ={\mathbb {E}} \left[ \exp (- \varsigma e^{\frac{t}{12}} WZ(t)) \right] . \end{aligned}$$
(5.1)

Ignoring the fact that the summation over n cannot be exchanged with the expectation due to the divergence of moments, we will consider the following formal power series

$$\begin{aligned} 1 + \sum _{n=1}^\infty \frac{(- \varsigma e^{\frac{t}{12}})^n}{n!} {\mathbb {E}} \left[ W^n Z(t)^n\right] , \end{aligned}$$

that we will again denote by \(g(\varsigma )\).

Remark 5.1

Following [97], we may informally write

$$\begin{aligned} {\mathbb {E}}[W^n]\varsigma ^n =\frac{\Gamma (A+B-n+1)}{\Gamma (A+B+1)}\varsigma ^n=\frac{\Gamma (A+B-\varsigma \partial _{\varsigma }+1)}{\Gamma (A+B+1)}\varsigma ^n \end{aligned}$$
(5.2)

so that

$$\begin{aligned} {\mathbb {E}} \left[ \exp (- \varsigma e^{\frac{t}{12}} Z(t)) \right] =\frac{\Gamma (A+B+1)}{\Gamma (A+B+1-\varsigma \partial _{\varsigma })} {\mathbb {E}} \left[ \exp (- \varsigma e^{\frac{t}{12}} WZ(t)) \right] , \end{aligned}$$
(5.3)

where the operator \(\frac{\Gamma (A+B+1)}{\Gamma (A+B+1-\varsigma \partial _{\varsigma })}\) should be understood in the following sense: If

$$\begin{aligned} \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-z)} = \sum _{n=0}^{+\infty }a_nz^n, \end{aligned}$$

in a neighborhood of \(z=0\), then

$$\begin{aligned} \frac{\Gamma (A+B+1)}{\Gamma (A+B+1-\varsigma \partial _{\varsigma })} = \sum _{n=0}^{+\infty }a_n(\varsigma \partial _{\varsigma })^n. \end{aligned}$$

Equation (5.3) is a positive temperature analogue of [105, Eq. (4.5)], which we will use in the next section. This type of argument can be traced back to the work of Baik and Rains [58].
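
The operator in (5.3) is diagonal on monomials, since \((\varsigma \partial _{\varsigma })\varsigma ^n = n\varsigma ^n\); the small numerical sketch below (ours, with an illustrative value of \(A+B+1\)) checks that the truncated series of \(\Gamma (A+B+1)/\Gamma (A+B+1-z)\), evaluated at \(z=n\), indeed reproduces the Gamma ratio appearing in (5.2).

```python
# Numerical illustration of Remark 5.1: the operator Gamma(c)/Gamma(c - s d_s) acts on
# s^n as multiplication by the Taylor series of Gamma(c)/Gamma(c - z) evaluated at z = n.
# Here c stands for A + B + 1; the series is truncated at a finite order.
import mpmath as mp

mp.mp.dps = 40
c, n, order = mp.mpf("4.5"), 2, 20
coeffs = mp.taylor(lambda z: mp.gamma(c) / mp.gamma(c - z), 0, order)   # a_0, ..., a_order

series_at_n = sum(a_k * n**k for k, a_k in enumerate(coeffs))
print(series_at_n)                       # approximately Gamma(c)/Gamma(c - n)
print(mp.gamma(c) / mp.gamma(c - n))     # exact value: Gamma(4.5)/Gamma(2.5) = 8.75
```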

Remark 5.2

Alternatively, one may use the density of the inverse Gamma distribution and write

$$\begin{aligned} g(\varsigma ) = \int _{0}^{+\infty } \frac{{{\mathrm {d}}} u}{\Gamma (A+B+1)}u^{-A-B-2}e^{-1/u}{\mathbb {E}}\left[ \exp (- \varsigma u e^{\frac{t}{12}} Z(t)) \right] . \end{aligned}$$
(5.4)

We may rewrite this equation using the following representation of the Bessel function

$$\begin{aligned} \int _{0}^{+\infty }{\mathrm {d}}u \, u^{-\nu -1} e^{-1/u-x^2 u} = 2 x^{\nu } K_{\nu }(2 x), \end{aligned}$$
(5.5)

for \(x>0\), where \(K_{\nu }\) denotes the modified Bessel K function. Exchanging the expectation with the integral over u in (5.4), and using this integral representation, we arrive at

$$\begin{aligned} {\mathbb {E}}\left[ 2 \left( \varsigma Z(t)e^{\frac{t}{12}}\right) ^{\frac{1+A+B}{2}} K_{1+A+B}\left( 2\sqrt{\varsigma Z(t)e^{\frac{t}{12}}}\right) \right] = g(\varsigma )\Gamma (1+A+B). \end{aligned}$$
(5.6)

Note that similar integral transforms involving the modified Bessel function K appear in [31, Theorem 2.9]. It is plausible that this expression can be inverted to compute the distribution of Z(t). For \(A+B+1=0\), an inversion formula is provided in [31, Appendix E]. We will see in Sect. 6 that we will actually not need to perform this inversion.
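
The representation (5.5) is also easy to check numerically; the sketch below (ours) compares the integral with \(2x^{\nu }K_{\nu }(2x)\) for a few illustrative values of \(\nu \) and x.

```python
# Numerical check of the Bessel representation (5.5):
# int_0^infty du u^{-nu-1} exp(-1/u - x^2 u) = 2 x^nu K_nu(2 x).
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

for nu, x in [(0.5, 1.0), (1.7, 0.8), (3.0, 2.0)]:
    lhs, _ = quad(lambda u: u**(-nu - 1) * np.exp(-1.0 / u - x**2 * u), 0, np.inf)
    rhs = 2 * x**nu * kv(nu, 2 * x)
    print(f"nu = {nu}, x = {x}:   integral = {lhs:.8f}   2 x^nu K_nu(2x) = {rhs:.8f}")
```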

The constraint \(\sum _{i=1}^{n_s} m_i=n\) in (3.27) can then be relaxed by reorganizing the series according to the number of strings:

$$\begin{aligned} g(\varsigma ) = 1 + \sum _{n_s=1}^\infty \frac{1}{n_s!} \mathsf{Z}(n_s,\varsigma ) \end{aligned}$$
(5.7)

where \(\mathsf{Z}(n_s,\varsigma )\) is the partition sum at fixed number of strings \(n_s\), calculated below. We now show that one can write the generating function as a Fredholm Pfaffian. This is possible thanks to the Schur Pfaffian identity (3.24) given above. The partition sum at fixed number of strings, expressed in terms of the reduced variables \(X_{2p-1} = m_p + 2 \mathbf{i }k_p \) and \(X_{2p} = m_p - 2 \mathbf{i }k_p\) for \(p\in [1,n_s]\), reads

$$\begin{aligned} \mathsf{Z}(n_s,\varsigma ) = \prod _{p=1}^{n_s} \sum _{m_p \geqslant 1} \int _{\mathbb {R}} \frac{{\mathrm {d}}k_p}{2 \pi } (-\varsigma )^{m_p} \frac{B_{k_p,m_p}}{4 \mathbf{i }k_p} e^{-t m_p k_p^2 + \frac{t}{12} m_p^3} \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j}\right] \end{aligned}$$
(5.8)

where \(B_{k,m}\) was given in (3.20). The summation over the variables \(m_p\) can be performed using the Mellin-Barnes summation trick, similarly to Refs. [30, 97]. The barrier \(A>(n-1)/2\) is overcome exactly as in Ref. [97] (see Lemma 6 and the discussion therein), through an analytic continuation of the Gamma functions contained in the \(B_{k,m}\) factor, the introduction of a particular contour \(C_0\), and a final requirement on the drift, \(A+1/2>0\). Indeed, define the contour \(C_0 = a + \mathbf{i }{\mathbb {R}}\) with \(a \in (0,\min \lbrace 2B+1,2A+1,1\rbrace )\). Then, for any holomorphic function f with sufficient decay at infinity, and in particular for the summand of Eq. (5.8) viewed as a function \(f(m_p)\), we have

$$\begin{aligned} \sum _{m \geqslant 1} (-\varsigma )^m f(m) = - \int _{C_0} \frac{{\mathrm {d}}w}{2 \mathbf{i }\pi } \varsigma ^w \frac{\pi }{\sin \pi w} f(w) = - \int _{\mathbb {R}} {\mathrm {d}}r \frac{\varsigma }{ \varsigma +e^{-r}} \int _{C_0} \frac{{\mathrm {d}}w}{2 \mathbf{i }\pi } f(w) e^{-w r}. \end{aligned}$$
(5.9)
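
The Mellin–Barnes identity (5.9) can be illustrated on a toy example; the sketch below (ours) uses the test function \(f(w)=1/(1+w)^2\), which has the required decay, rather than the actual summand of (5.8).

```python
# Numerical illustration of the Mellin-Barnes trick (5.9) for a toy test function:
# sum_{m>=1} (-s)^m f(m) = - int_{C_0} dw/(2 i pi) s^w (pi / sin(pi w)) f(w),
# with C_0 = a + i R, 0 < a < 1, and f(w) = 1/(1+w)^2.
import numpy as np
from scipy.integrate import quad

s, a = 0.7, 0.5
f = lambda w: 1.0 / (1.0 + w)**2

lhs = sum((-s)**m * f(m) for m in range(1, 200))

def integrand_re(y):
    w = a + 1j * y                                                  # parameterize C_0
    val = s**w * np.pi / np.sin(np.pi * w) * f(w) / (2.0 * np.pi)   # dw/(2 i pi) = dy/(2 pi)
    return val.real       # the imaginary part is odd in y and integrates to zero

rhs = -quad(integrand_re, -60.0, 60.0, limit=400)[0]
print("series:", lhs, "   contour integral:", rhs)
```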

For each \(m_p\) we therefore introduce two variables \(r_p\) and \(w_p\), and we redefine the reduced variables \(X_{2p}\) and \(X_{2p-1}\) under the minimal replacement \(m_p \rightarrow w_p\) imposed by the Mellin-Barnes formula, which we apply despite the presence of poles to the right of the contour \(C_0\). This is an a priori illegal step, but it turns the divergent moment generating series into a well-defined, convergent series equal to the Laplace transform. We refer to [55, Sect. 6], where this procedure and its degree of rigor are discussed in great detail. This leads to the following rewriting of the coefficient \(\mathsf{Z}(n_s,\varsigma )\) (see Section 5 in [54] for similar manipulations):

$$\begin{aligned} \begin{aligned} \mathsf{Z}(n_s,\varsigma ) =&(-1)^{n_s}\prod _{p=1}^{n_s} \int _{\mathbb {R}} {\mathrm {d}}r_p \frac{\varsigma }{\varsigma + e^{ -r_p }} \iint _{C_0^2} \frac{{\mathrm {d}}X_{2p-1}}{4\mathbf{i }\pi } \frac{{\mathrm {d}}X_{2p}}{4\mathbf{i }\pi } \frac{\sin (\frac{\pi }{2} (X_{2p}-X_{2p-1}))}{2\pi } \\&\times \frac{\Gamma (A+\frac{1}{2}-\frac{X_{2p}}{2})}{\Gamma (A+\frac{1}{2}+\frac{X_{2p}}{2})}\frac{\Gamma (A+\frac{1}{2}-\frac{X_{2p-1}}{2})}{\Gamma (A+\frac{1}{2}+\frac{X_{2p-1}}{2})} \frac{\Gamma (B+\frac{1}{2}-\frac{X_{2p}}{2})}{\Gamma (B+\frac{1}{2}+\frac{X_{2p}}{2})}\frac{\Gamma (B+\frac{1}{2}-\frac{X_{2p-1}}{2})}{\Gamma (B+\frac{1}{2}+\frac{X_{2p-1}}{2})} \\&\times \Gamma (X_{2p-1}) \Gamma (X_{2p}) e^{-(X_{2p-1}+X_{2p}) \frac{r_p}{2} + t ( \frac{X_{2p-1}^3}{24} + \frac{X_{2p}^3}{24} )} \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j}\right] \end{aligned} \end{aligned}$$
(5.10)

Remark 5.3

Note that the contour \(C_0\) passes to the left of the poles of the Gamma function at \(X=2A+1, 2B+1\).

We observe that the integrals are almost separable in \(X_{2p}\) and \(X_{2p-1}\), except for the sine function which couples them. By anti-symmetrization, and similarly to [54, Sect. 5], we can make the replacement

$$\begin{aligned} \sin \left( \frac{\pi }{2} (X_{2p}-X_{2p-1})\right) \rightarrow 2\sin \left( \frac{\pi }{2} X_{2p}\right) \cos \left( \frac{\pi }{2} X_{2p-1}\right) . \end{aligned}$$
(5.11)

The last manipulations consist in rescaling all the variables X by a factor of 2 and replacing the contours of integration by \(C=\frac{a}{2} + \mathbf{i }{\mathbb {R}}\). Hence we have

$$\begin{aligned} \begin{aligned} \mathsf{Z}(n_s,\varsigma ) =&(-1)^{n_s}\prod _{p=1}^{n_s} \int _{\mathbb {R}} {\mathrm {d}}r_p \frac{\varsigma }{\varsigma + e^{ -r_p }} \iint _{C^2} \frac{{\mathrm {d}}X_{2p-1}}{2\mathbf{i }\pi } \frac{{\mathrm {d}}X_{2p}}{2\mathbf{i }\pi } \frac{\sin (\pi X_{2p})\cos (\pi X_{2p-1})}{\pi } \\&\times \frac{\Gamma (A+\frac{1}{2}-X_{2p})}{\Gamma (A+\frac{1}{2}+X_{2p})}\frac{\Gamma (A+\frac{1}{2}-X_{2p-1})}{\Gamma (A+\frac{1}{2}+X_{2p-1})} \frac{\Gamma (B+\frac{1}{2}-X_{2p})}{\Gamma (B+\frac{1}{2}+X_{2p})}\frac{\Gamma (B+\frac{1}{2}-X_{2p-1})}{\Gamma (B+\frac{1}{2}+X_{2p-1})}\\&\times \Gamma (2 X_{2p-1}) \Gamma (2X_{2p}) e^{-(X_{2p-1}+X_{2p}) r_p + t ( \frac{X_{2p-1}^3}{3} + \frac{X_{2p}^3}{3} )} \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j}\right] \end{aligned} \end{aligned}$$
(5.12)

There are a few last steps before we introduce the Fredholm Pfaffian. First define the functions

$$\begin{aligned} \begin{aligned}&\phi _{2p}(X)=\frac{\sin (\pi X)}{\pi }\Gamma (2X)\frac{\Gamma (A+\frac{1}{2}-X)}{\Gamma (A+\frac{1}{2}+X)}\frac{\Gamma (B+\frac{1}{2}-X)}{\Gamma (B+\frac{1}{2}+X)}e^{ -r_p X + t \frac{X^3}{3} }\\&\phi _{2p-1}(X)=\cos (\pi X)\Gamma (2X)\frac{\Gamma (A+\frac{1}{2}-X)}{\Gamma (A+\frac{1}{2}+X)}\frac{\Gamma (B+\frac{1}{2}-X)}{\Gamma (B+\frac{1}{2}+X)}e^{ -r_p X + t \frac{X^3}{3} } \end{aligned} \end{aligned}$$
(5.13)

Using a known property of Pfaffians (see De Bruijn [106]), we can rewrite the partition sum at fixed number of strings itself as a Pfaffian, i.e. we use that, for suitable functions \(\Phi _{\ell }\),

$$\begin{aligned} \prod _{\ell =1}^{2n_s}\int _{C}\frac{{\mathrm {d}}X_{\ell }}{2\mathbf{i }\pi }\Phi _{\ell }(X_{\ell }) \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j}\right] =\underset{2 n_s \times 2 n_s}{\mathrm{Pf}}\left[ \iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi } \Phi _i(w)\Phi _j(z)\frac{w-z}{w+z} \right] \end{aligned}$$
(5.14)

This leads to the definition of a \(2 n_s \times 2 n_s\) matrix M such that

$$\begin{aligned} M_{i j} =\iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi } \phi _i(w)\phi _j(z)\frac{w-z}{w+z} \end{aligned}$$
(5.15)

Since each variable \(r_p\) is shared by four elements of this matrix, it is more convenient to view M as composed of \(2\times 2\) blocks, which we denote K, whose elements are presented in Eq. (2.12). Finally, the string-replicated partition function is given by an infinite series of Pfaffians

$$\begin{aligned} g(\varsigma )=1+\sum _{n_s=1}^\infty \frac{(-1)^{n_s}}{n_s!} \prod _{p=1}^{n_s} \int _{\mathbb {R}} {\mathrm {d}}r_p\frac{\varsigma }{\varsigma + e^{-r_p}} \underset{ n_s \times n_s}{\mathrm{Pf}}\left( K(r_k,r_{\ell })\right) \end{aligned}$$
(5.16)

This series is a Fredholm Pfaffian,

$$\begin{aligned} g(\varsigma )={\mathbb {E}} \left[ \exp (-\varsigma We^{H(t)}) \right] ={\mathrm{Pf}}(J-\sigma _\varsigma K)_{{\mathbb {L}}^2({\mathbb {R}})}, \end{aligned}$$
(5.17)

where K is given in (2.12), the function \(\sigma _\varsigma \) is given by \(\sigma _\varsigma (r)=\frac{\varsigma }{\varsigma +e^{-r}}\) and the \(2\times 2\) kernel J is given by \(J(r,r')=\bigg (\begin{array}{cc} 0 &{} 1 \\ -1 &{} 0 \end{array} \bigg )\mathbb {1}_{r=r'}\). For the precise definition and properties of Fredholm Pfaffians see Section 8 in [107], as well as e.g. Section 2.2. in [44], Appendix B in [108] and Appendix G in [24, 25].

6 Large Time Limit of the Fredholm Pfaffian and the Distribution of the KPZ Height: Crossover Kernel

We will now study the large time limit of our kernel. To understand the scaling required at large time, let us recall the expression of the partition sum at fixed number of strings

$$\begin{aligned} \begin{aligned} \mathsf{Z}(n_s,\varsigma )&= (-1)^{n_s}\prod _{p=1}^{n_s} \int _{\mathbb {R}} {\mathrm {d}}r_p \frac{\varsigma }{\varsigma + e^{ -r_p }} \iint _{C^2} \frac{{\mathrm {d}}X_{2p-1}}{2\mathbf{i }\pi } \frac{{\mathrm {d}}X_{2p}}{2\mathbf{i }\pi } \frac{\sin (\pi X_{2p})\cos (\pi X_{2p-1})}{\pi } \\&\quad \times \frac{\Gamma (A+\frac{1}{2}-X_{2p})}{\Gamma (A+\frac{1}{2}+X_{2p})}\frac{\Gamma (A+\frac{1}{2}-X_{2p-1})}{\Gamma (A+\frac{1}{2}+X_{2p-1})} \frac{\Gamma (B+\frac{1}{2}-X_{2p})}{\Gamma (B+\frac{1}{2}+X_{2p})}\frac{\Gamma (B+\frac{1}{2}-X_{2p-1})}{\Gamma (B+\frac{1}{2}+X_{2p-1})}\\&\quad \times \Gamma (2 X_{2p-1}) \Gamma (2X_{2p}) e^{-(X_{2p-1}+X_{2p}) r_p + \frac{t}{3} ( X_{2p-1}^3 + X_{2p}^3 )} \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j}\right] \end{aligned} \end{aligned}$$
(6.1)

At large time, we want to eliminate the time factor in the exponential, hence we perform the change of variables

$$\begin{aligned} X=t^{-1/3}{\tilde{X}}, \qquad r=t^{1/3}{\tilde{r}}, \qquad A+\frac{1}{2}=at^{-1/3}, \qquad B+\frac{1}{2}=bt^{-1/3}. \end{aligned}$$
(6.2)

In the large time limit, the Gamma, cosine and sine functions simplify using that for small positive argument

$$\begin{aligned} \Gamma (x)\simeq \frac{1}{x}, \qquad \cos (x)\simeq 1, \qquad \sin (x) \simeq x. \end{aligned}$$
(6.3)

Under these simplifications, the partition sum at fixed number of strings reads in the limit \(t \rightarrow +\infty \) (dropping all tildes)

$$\begin{aligned} \begin{aligned}&\mathsf{Z}(n_s,\varsigma ) \\&\quad = (-1)^{n_s}\prod _{p=1}^{n_s} \int _{\mathbb {R}} {\mathrm {d}}r_p \frac{\varsigma }{\varsigma + e^{ -t^{1/3}r_p }} \iint _{C^2} \frac{{\mathrm {d}}X_{2p-1}}{2\mathbf{i }\pi } \frac{{\mathrm {d}}X_{2p}}{2\mathbf{i }\pi } \frac{1}{4X_{2p-1}} \frac{a+X_{2p}}{a- X_{2p}}\frac{a+X_{2p-1}}{a- X_{2p-1}} \\&\qquad \times \frac{b+X_{2p}}{b- X_{2p}}\frac{b+X_{2p-1}}{b- X_{2p-1}} e^{-(X_{2p-1}+X_{2p}) r_p + \frac{X_{2p-1}^3}{3} + \frac{X_{2p}^3}{3} } \underset{2 n_s \times 2 n_s}{\mathrm{Pf}} \left[ \frac{X_i-X_j}{X_i+X_j}\right] \end{aligned}\nonumber \\ \end{aligned}$$
(6.4)

The contours C now have to be understood as \(C={\tilde{a}}+\mathbf{i }{\mathbb {R}}\), where \({\tilde{a}}\in (0,\min \lbrace a,b\rbrace )\). We emphasize that the contours all lie to the left of the poles at \(X=a\) and \(X=b\). We may now use (5.14) and (5.16) to write the Laplace transform \(g(\varsigma )\) under the scalings in (6.2) as the Pfaffian \(\mathrm {Pf}[J-\sigma _{\varsigma }K^{(a,b)}]_{L^2({\mathbb {R}})} \). The matrix valued kernel \(K^{(a, b)}\) reads in this limit

$$\begin{aligned} \begin{aligned} K^{(a, b)}_{11}(r,r')&=\frac{1}{4}\iint _{C^2}\frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi }\frac{w-z}{w+z}\frac{1}{wz}\frac{a+w}{a-w}\frac{a+z}{a-z} \frac{b+w}{b-w}\frac{b+z}{b-z} e^{-rw-r'z + \frac{w^3+z^3}{3} }\\ K^{(a, b)}_{22}(r,r')&=\frac{1}{4}\iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi }\frac{w-z}{w+z}\frac{a+w}{a-w}\frac{a+z}{a-z} \frac{b+w}{b-w}\frac{b+z}{b-z}e^{ -rw-r'z + \frac{w^3+z^3}{3} }\\ K^{(a, b)}_{12}(r,r')&=\frac{1}{4}\iint _{C^2} \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }\frac{{\mathrm {d}}z}{2\mathbf{i }\pi }\frac{w-z}{w+z}\frac{1}{w}\frac{a+w}{a-w}\frac{a+z}{a-z}\frac{b+w}{b-w}\frac{b+z}{b-z} e^{ -rw-r'z + \frac{w^3+z^3}{3} } \end{aligned} \end{aligned}$$
(6.5)

Remark 6.1

The kernel \(K^{(a, b)}\) has a particular structure, indeed its elements are related through derivative identities: \(K^{(a, b)}_{22}(r,r')=\partial _r \partial _{r'} K^{(a, b)}_{11}(r,r')\), \(K^{(a, b)}_{12}(r,r')=-\partial _{r'} K^{(a, b)}_{11}(r,r')\) and \(K^{(a, b)}_{22}(r,r')=-\partial _r K^{(a, b)}_{12}(r,r')\).

Remark 6.2

The kernel \(K^{(a, b)}\) can be obtained equivalently from the kernel (2.12) by rescaling, as in [56].

Finally, choosing the variable \(\varsigma \) as \(\varsigma =e^{-st^{1/3}}\), at large time we have \(\lim _{t\rightarrow +\infty } \sigma _{\varsigma }(rt^{1/3})=\Theta (r-s)\), where \(\Theta \) is the Heaviside theta function. The Fredholm Pfaffian formula for the generating function then becomes in the limit

$$\begin{aligned} \lim _{t\rightarrow +\infty }g(\varsigma =e^{-st^{1/3}})={\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )}. \end{aligned}$$
(6.6)

On the other hand, at large time, the Laplace transform of the distribution of the exponential of the KPZ height converges towards the cumulative distribution function of the height (see [78, Lemma 4.1.39]), i.e.

$$\begin{aligned} \begin{aligned} g(\varsigma =e^{-st^{1/3}})={\mathbb {E}}\left[ \exp (- We^{H(t)-st^{1/3}}) \right]&\simeq _{t \rightarrow +\infty } {\mathbb {E}}\left[ \Theta (st^{1/3}-H(t)-\log W) \right] \\&\simeq _{t \rightarrow +\infty } {\mathbb {P}}\left( \frac{H(t)+\log W}{t^{1/3}}\leqslant s\right) \end{aligned} \end{aligned}$$
(6.7)

where \(\Theta \) is the Heaviside theta function.

Note that since \(W\) is an inverse Gamma random variable with parameter \(A+B+1=t^{-1/3}(a+b)\), the random variable \(\log (W)/t^{1/3}\) weakly converges to an exponential random variable with parameter \(a+b\).

At this point, we obtain that for any \(a, b>0\),

$$\begin{aligned} \lim _{t \rightarrow +\infty } {\mathbb {P}}\left( \frac{H(t) }{t^{1/3}}\leqslant s-{\mathcal {E}}\right) ={\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )} \end{aligned}$$
(6.8)

where \({\mathcal {E}}\) is an exponential random variable with parameter \(a+b\) independent from H(t), and the matrix kernel \(K^{(a, b)}\) is given in (6.5). Using the density of the exponential distribution, and denoting \(F^{(a,b)}(s) = \lim _{t\rightarrow \infty }{\mathbb {P}}\left( \frac{H(t) }{t^{1/3}}\leqslant s\right) \), we may rewrite (6.8) as

$$\begin{aligned} \int _{0}^{+\infty } {{\mathrm {d}}} x\, (a+b)e^{-x(a+b)} F^{(a,b)}(s-x) = {\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )}. \end{aligned}$$
(6.9)

Following [105, Eq. (4.3)] (see also Remark 5.1), we differentiate (6.9) with respect to s and integrate by parts on the left hand side. We obtain

$$\begin{aligned} \partial _s {\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )}&= (a+b)F^{(a,b)}(s) - (a+b)^2 \int _{0}^{+\infty } {{\mathrm {d}}} x\, e^{-x(a+b)} F^{(a,b)}(s-x) \\&= (a+b)F^{(a,b)}(s)- (a+b) {\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )}. \end{aligned}$$

Finally, we can write

$$\begin{aligned} \lim _{t \rightarrow +\infty } {\mathbb {P}}\left( \frac{H(t) }{t^{1/3}}\leqslant s\right) =\left( 1+\frac{\partial _s}{a+b} \right) {\mathrm{Pf}}(J-K^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )} := F^{(a,b)}(s). \end{aligned}$$
(6.10)
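
The deconvolution step leading to (6.10) only uses the fact that convolving with the exponential density and then applying \(1+\partial _s/(a+b)\) is the identity; the sketch below (ours, with a smooth test function in place of the actual limiting CDF) checks this numerically.

```python
# Numerical check of the inversion used in (6.10): if
# g(s) = int_0^infty (a+b) e^{-(a+b) x} F(s - x) dx, then F(s) = (1 + d_s/(a+b)) g(s).
# F below is just a smooth test function, not the actual KPZ distribution.
import numpy as np
from scipy.integrate import quad

a, b, s = 0.8, 1.3, 0.4
c = a + b
F = lambda u: 1.0 / (1.0 + np.exp(-u))          # logistic test "CDF"

g = lambda sv: quad(lambda x: c * np.exp(-c * x) * F(sv - x), 0, np.inf)[0]
eps = 1e-5
dg = (g(s + eps) - g(s - eps)) / (2.0 * eps)    # numerical derivative d_s g(s)

print("F(s)                 :", F(s))
print("(1 + d_s/(a+b)) g(s) :", g(s) + dg / c)
```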

In the next sections, we show that the distribution \(F^{(a,b)}\) interpolates between various known distributions as \(a\) or \(b\) goes to infinity. The most interesting case, corresponding to stationary growth, is when both \(a\) and \(b\) go to zero, and will be studied in detail in Sect. 7.

Remark 6.3

Performing the large time limit at any fixed \(A > -1/2\) and \(B>-1/2\) corresponds – modulo an exchange of limits – to the scaling considered above with \(a,b=+\infty \). One can show that in the limit \(a,b\rightarrow +\infty \), the kernel (6.5) converges to the GSE matrix kernel, by the same manipulations as in [56, Sect. 4.1]. Indeed, as the contours of the kernel \(K^{(a, b)}\) are parallel to the imaginary axis and cross the real axis between 0 and \(\min \lbrace a,b\rbrace \), we can take the limit \(a,b\rightarrow \infty \) in the integrand without affecting the contours. All rational functions involving the parameters \(a, b\) in the large time limit of the kernel \(K^{(a, b)}\) in (6.5) converge to the value 1. Hence in this limit we obtain the kernel \(K^\infty \) given in [56, Eq. (113)], which is precisely the kernel associated to the Gaussian Symplectic Ensemble (GSE) of random matrices, as also given in Lemma 2.7 of [44]. Hence the distribution of the height at \(x=0\) converges at large time, for boundary conditions such that \(a, b\rightarrow \infty \) (e.g. for any fixed \(A,B>-1/2\)), to the GSE Tracy–Widom distribution, as we will also show below.

Remark 6.4

In the limit \(b\rightarrow +\infty \), the kernel \(K^{(a,b)}\) in (6.5) converges to the kernel \(K^{\epsilon }\) with \(\epsilon =a\) obtained in [56, Eq. (64)] in the study of the droplet initial condition. The Fredholm Pfaffian \(F^{(a)}(s):={\mathrm{Pf}}(J-K^{(a,+\infty )})_{{\mathbb {L}}^2(s, +\infty )}\) interpolates between the CDF of the GSE Tracy–Widom distribution, \(F_4(s)\) (at \(a\rightarrow +\infty \)), and the CDF of the GOE Tracy–Widom distribution, \(F_1(s)\) (at \(a=0\)). Note that the GOE kernel obtained in [56, Eq. (83)] was found to provide a new representation of the GOE Tracy–Widom distribution. If instead we take \(a\rightarrow +\infty \) at fixed \(b\), we obtain the same kernel with \(\epsilon =b\), because of the symmetry \(a\leftrightarrow b\). Physically, it describes the CDF of the rescaled height of the KPZ equation with Brownian initial data and Dirichlet boundary condition.

7 From a Matrix Valued Kernel to a Scalar Kernel

7.1 Solution for the KPZ Generating Function at All Times for Generic A, B in Terms of a Scalar Kernel

The general kernel we have obtained in (2.12) has a particular structure in the form of a Schur Pfaffian. With this structure, the kernel satisfies the hypotheses of Proposition B.2 of [54], recalled as Lemma B.2 in Appendix B. This proposition allows us to transform the Fredholm Pfaffian in (5.17), which involves a matrix-valued kernel, into a Fredholm determinant of a scalar kernel. To proceed, let us first define the functions

$$\begin{aligned} \begin{aligned} G(w)&= \frac{\Gamma (A+\frac{1}{2}-w)}{\Gamma (A+\frac{1}{2}+w)}\frac{\Gamma (B+\frac{1}{2}-w)}{\Gamma (B+\frac{1}{2}+w)}\Gamma (2w)\\ f_{\mathrm{odd}}(r)&=\int _C \frac{{\mathrm {d}}w}{2\mathbf{i }\pi }G(w)\cos (\pi w)e^{-rw +t\frac{w^3}{3}}\\ f_{\mathrm{even}}(r)&=\int _C \frac{{\mathrm {d}}z}{2\mathbf{i }\pi }G(z)\frac{\sin (\pi z)}{\pi }e^{-rz +t\frac{z^3}{3}} \end{aligned} \end{aligned}$$
(7.1)

and the kernel \({\bar{K}}_{t,\varsigma }\) such that for all \((x,y)\in {\mathbb {R}}_+^2\)

$$\begin{aligned} \begin{aligned} {\bar{K}}_{t,\varsigma }(x,y)&=2\partial _x \int _{\mathbb {R}} {\mathrm {d}}r \frac{\varsigma }{\varsigma + e^{-r}}[f_{\mathrm{even}}(r+x)f_{\mathrm{odd}}(r+y)-f_{\mathrm{odd}}(r+x)f_{\mathrm{even}}(r+y)]\\&=2\partial _x \int _{\mathbb {R}}{\mathrm {d}}r \frac{\varsigma }{\varsigma + e^{-r}}\\&\quad \times \iint _{C^2} \frac{{\mathrm {d}}w {\mathrm {d}}z}{(2\mathbf{i }\pi )^2 }G(z)G(w) \frac{\sin (\pi (z-w))}{\pi }e^{-xz-yw -rw-rz + t \frac{w^3+z^3}{3} }\\&{=2\partial _x \iint _{C^2} \frac{{\mathrm {d}}w {\mathrm {d}}z}{(2\mathbf{i }\pi )^2 }G(z)G(w) \frac{\sin (\pi (z-w))}{\sin (\pi (z+w))}\varsigma ^{w+z}e^{-xz-yw + t \frac{w^3+z^3}{3} } } \end{aligned} \end{aligned}$$
(7.2)

Then, the Laplace transform of the one-point distribution of the exponential of the KPZ height admits the following representation:

$$\begin{aligned} g(\varsigma )={\mathbb {E}}\left[ \exp (-\varsigma We^{H(t)}) \right] ={\mathrm{Pf}}(J-\sigma _\varsigma K)_{{\mathbb {L}}^2({\mathbb {R}})}=\sqrt{\mathrm {Det}(I-{\bar{K}}_{t,\varsigma })_{{\mathbb {L}}^2({\mathbb {R}}_+)}}. \end{aligned}$$
(7.3)
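Schematically, the square root in (7.3) reflects the general relation between Fredholm Pfaffians and Fredholm determinants: for any antisymmetric matrix kernel \(M\) for which the Fredholm Pfaffian is well defined, one has

$$\begin{aligned} {\mathrm{Pf}}(J-M)^2 = \mathrm {Det}(J-M)=\mathrm {Det}(I-J^{-1}M), \end{aligned}$$

since \(\mathrm {Det}(J)=1\). The content of Proposition B.2 (Lemma B.2 in Appendix B) is that, for kernels with the Schur Pfaffian structure of (2.12), the resulting determinant reduces to that of the scalar kernel \({\bar{K}}_{t,\varsigma }\) acting on \({\mathbb {L}}^2({\mathbb {R}}_+)\).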

7.2 Large Time Limit of the Scalar Kernel

To deduce the large time asymptotics of H(t) from the determinantal formula (7.3), one performs the same rescaling as in Sect. 6, namely one chooses \(\varsigma = e^{- t^{1/3} s}\) and one rescales \((w,z) \rightarrow t^{-1/3} (w,z)\), \(r \rightarrow t^{1/3} r\). The kernel \({\bar{K}}_{t,\varsigma }\) becomes

$$\begin{aligned} {\bar{K}}^{(a, b)}(x,y) =\frac{1}{2} \iint _{C^2} \frac{{\mathrm {d}}w {\mathrm {d}}z}{(2\mathbf{i }\pi )^2 }\frac{a+w}{a-w}\frac{b+w}{b-w} \frac{a+z}{a-z}\frac{b+z}{b-z}\frac{w-z}{w+z}\frac{1}{w} e^{-xz-yw + \frac{w^3+z^3}{3} }, \end{aligned}$$
(7.4)

where the contour C is an upwardly oriented vertical line with real part between 0 and \(\min \lbrace a, b\rbrace \), as previously. Then, using the definition of \(F^{(a, b)}\) from (6.10), the determinantal formula (7.3) implies the following: for any \(a, b>0\),

$$\begin{aligned} F^{(a, b)}(s) = \lim _{t \rightarrow +\infty } {\mathbb {P}}\left( \frac{H(t) }{t^{1/3}}\leqslant s\right) =\left( 1+\frac{\partial _s}{a+b} \right) \sqrt{\mathrm {Det}(I-{{\bar{K}}}^{(a, b)})_{{\mathbb {L}}^2(s,+\infty )}} \end{aligned}$$

where \({{\bar{K}}}^{(a,b)}\) is defined in (7.4). Using \(\frac{1}{2} \frac{w-z}{(w+z)w} = \frac{1}{w+z} -\frac{1}{2w}\), we obtain that

$$\begin{aligned} {{\bar{K}}}^{(a, b)}(x,y) = \int _{0}^{+\infty } {{\mathrm {d}}} \lambda A^{(a, b)}(x+\lambda )A^{(a, b)}(y+\lambda )\, -\frac{1}{2} A^{(a, b)}(x) \int _{0}^{+\infty }A^{(a, b)}(y+\lambda )\, {{\mathrm {d}}}\lambda , \end{aligned}$$
(7.5)

where the function \(A^{(a, b)}(x)\) is defined by

$$\begin{aligned} A^{(a, b)}(x) = \int \frac{{{\mathrm {d}}} z}{2\mathbf{i }\pi } \frac{a+z}{a-z} \frac{b+z}{b-z} e^{-xz+ \frac{z^3}{3}}, \end{aligned}$$
(7.6)

where the contour is a vertical line with real part between 0 and \(\min \lbrace a, b\rbrace \). Note that the function \(A^{(a, b)}\) has exponential decay at \(+\infty \), that is, for any \(c\in (0, \min \lbrace a, b\rbrace )\), there exists a constant \(C>0\) such that \(\left| A^{(a, b)}(x)\right| \leqslant Ce^{-cx}\). Let us introduce an operator \({\hat{A}}_s\) acting on \({\mathbb {L}}^2(0,+\infty )\) with kernel

$$\begin{aligned} {\hat{A}}_s(x,y)= A^{(a, b)}(x+y+s), \end{aligned}$$
(7.7)

and an operator \({\bar{K}}^{(a,b)}_s\) acting on \({\mathbb {L}}^2(0,+\infty )\) with kernel \({\bar{K}}^{(a,b)}_s(x, y) := {\bar{K}}^{(a,b)}(x+s, y+s)\).
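Let us also record, for orientation, where the rational factors in (7.4) and (7.6) come from. Writing, as in Sect. 6 (to which we refer for the precise conventions), \(A+\frac{1}{2}= a\, t^{-1/3}\) and \(B+\frac{1}{2}= b\, t^{-1/3}\), and rescaling \(w \rightarrow t^{-1/3} w\), the small-argument behaviour \(\Gamma (\epsilon ) \simeq 1/\epsilon \) gives

$$\begin{aligned} \frac{\Gamma (A+\frac{1}{2}-t^{-1/3}w)}{\Gamma (A+\frac{1}{2}+t^{-1/3}w)} \simeq \frac{a+w}{a-w}, \qquad \Gamma (2 t^{-1/3} w) \simeq \frac{t^{1/3}}{2w}, \end{aligned}$$

and similarly for the factors involving B and b. Together with \(\sin (\pi t^{-1/3} u) \simeq \pi t^{-1/3} u\), this reproduces the structure of (7.4), the remaining powers of \(t^{1/3}\) being absorbed by the rescaling of r, w and z.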

Claim 7.1

For any \(s\in {\mathbb {R}}\), and \(a, b>0\),

$$\begin{aligned} \sqrt{\mathrm {Det}(I - {{\bar{K}}}^{(a, b)}_s)} = \frac{1}{2} \left( \mathrm {Det}(I - {\hat{A}}_s) + \mathrm {Det}(I + {\hat{A}}_s)\right) , \end{aligned}$$
(7.8)

where all operators act on \({\mathbb {L}}^2(0,+\infty )\).

Proof

Equation (7.5) implies that, as operators acting on \({\mathbb {L}}^2(0,+\infty )\), we have

$$\begin{aligned} {{\bar{K}}}^{(a, b)}_s= {\hat{A}}_s^2 - \frac{1}{2} \vert {\hat{A}}_s \delta \rangle \langle 1 {\hat{A}}_s\vert . \end{aligned}$$

At this point, we recognize that the operator \({{\bar{K}}}^{(a, b)}_s\) has the same structure as in [87] and we may apply the same steps as in Equations (32)–(35) therein. More precisely, we may use the matrix determinant lemma to obtain that

$$\begin{aligned} \mathrm {Det}\left( I - {{\bar{K}}}_s \right) = \mathrm {Det}\left( I-{\hat{A}}_s^2 \right) \left( 1+\frac{1}{2} \langle 1\vert \frac{{\hat{A}}_s^2}{I-{\hat{A}}_s^2} \vert \delta \rangle \right) . \end{aligned}$$
(7.9)
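Here the matrix determinant lemma was used in the operator form

$$\begin{aligned} \mathrm {Det}\left( M + \vert u \rangle \langle v \vert \right) = \mathrm {Det}\left( M\right) \left( 1 + \langle v \vert M^{-1} \vert u \rangle \right) , \end{aligned}$$

applied with \(M = I-{\hat{A}}_s^2\), \(\vert u \rangle = \frac{1}{2}\vert {\hat{A}}_s \delta \rangle \) and \(\langle v \vert = \langle 1 {\hat{A}}_s \vert \).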

Then, we use the decomposition

$$\begin{aligned} \frac{{\hat{A}}_s^2}{I-{\hat{A}}_s^2} = -I+ \frac{1}{2} \left( \frac{I}{I-{\hat{A}}_s} +\frac{I}{I+{\hat{A}}_s} \right) , \end{aligned}$$

and recall that \(\langle 1\vert I\vert \delta \rangle =1\). Using (the proof of) [109, Proposition 1] we have

$$\begin{aligned} \langle 1\vert \frac{I}{I\pm {\hat{A}}_s} \vert \delta \rangle = \frac{\mathrm {Det}\left( I\mp {\hat{A}}_s \right) }{\mathrm {Det}\left( I\pm {\hat{A}}_s \right) }. \end{aligned}$$
(7.10)

Plugging (7.10) into (7.9) yields the statement of the claim. \(\square \)

Remark 7.2

The identity (7.10) is true for any kernel of the type \(B_s(x,y)=B(x+y+s)\) such that B has sufficient decay at \(+\infty \), so that \(\mathrm {Det}(I\pm B_s)\) and \(\langle 1\vert \frac{I}{I+ B_s} \vert \delta \rangle \) converge to 1 as s goes to \(+\infty \) (see the proof of [109, Proposition 1] for details). In our case, this condition is satisfied due to the exponential decay of the function \(A^{(a, b)}\) for fixed \(a, b>0\).

Hence one has

$$\begin{aligned} F^{(a, b)}(s) = \frac{1}{2} \left( 1 + \frac{\partial _s}{a+b}\right) \left( \mathrm {Det}(I - {\hat{A}}_s) + \mathrm {Det}(I + {\hat{A}}_s)\right) \end{aligned}$$
(7.11)

7.3 Limit \(a, b\rightarrow 0\): The Critical Stationary Case

7.3.1 CDF in Terms of a Fredholm Determinant

Moving the contour to the right in the definition of \(A^{(a, b)}\) in (7.6), we obtain

$$\begin{aligned} A^{(a, b)}(x) = {{\tilde{A}}}^{(a, b)}(x) + 2 \frac{a+b}{a-b} (h_b(x) - h_a(x) ), \end{aligned}$$
(7.12)

with \(h_b(x)=be^{- x b+ b^3/3}\) and

$$\begin{aligned} {{\tilde{A}}}^{(a, b)}(x) = \int \frac{{\mathrm {d}}z}{2 \mathbf{i }\pi } \frac{a+z}{a-z} \frac{b+z}{b-z} e^{-x z + z^3/3} = {\mathrm{Ai}}(x) + 2 (a+ b) \int _x^{+\infty }{\mathrm {d}}\lambda \, \mathrm {Ai}(\lambda ) + {\mathcal {O}}(b^2,a^2,ba), \end{aligned}$$
(7.13)

where in the integral over z, the contour passes to the right of \(b, a\). We introduce the operator \({{\tilde{A}}}_s\) acting on \({\mathbb {L}}^2(0,+\infty )\) with kernel

$$\begin{aligned} {{\tilde{A}}}_s (x,y)= {{\tilde{A}}}^{(a, b)}(s+x+y). \end{aligned}$$

We thus write

$$\begin{aligned} {\hat{A}}_s(x,y)= {{\tilde{A}}}_s(x,y) + 2 \frac{a+b}{a-b} ( |f_b(x) \rangle \langle f_b(y) | - |f_a(x) \rangle \langle f_a(y) | ), \end{aligned}$$
(7.14)

with \(f_b(x) = \sqrt{b} e^{b^3/6 - bs/2 -bx}\). Using the matrix determinant lemma, we have

$$\begin{aligned} \mathrm {Det}(I \mp {\hat{A}}_s) = \mathrm {Det}(I \mp {{\tilde{A}}}_s) \left( \left( 1 \mp 2 \frac{a+b}{a-b} I_{b,b} \right) \left( 1 \pm 2 \frac{a+b}{a-b} I_{a,a} \right) + 4 \left( \frac{a+b}{a-b}\right) ^2 I_{b,a} I_{a,b} \right) \end{aligned}$$
(7.15)

where

$$\begin{aligned} I_{\alpha ,\beta } = \langle f_\alpha | f_\beta \rangle \pm \langle f_\alpha | \frac{{{\tilde{A}}}_s}{I \mp {{\tilde{A}}}_s} | f_\beta \rangle . \end{aligned}$$
(7.16)
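For completeness, the expansion in (7.13) follows from writing, on a contour to the right of a and b,

$$\begin{aligned} \frac{a+z}{a-z}\frac{b+z}{b-z} = \frac{(z+a)(z+b)}{(z-a)(z-b)} = 1 + \frac{2(a+b)}{z} + {\mathcal {O}}(b^2,a^2,ba), \end{aligned}$$

together with the Airy representations \(\int \frac{{\mathrm {d}}z}{2 \mathbf{i }\pi } e^{-x z + z^3/3} = {\mathrm{Ai}}(x)\) and \(\int \frac{{\mathrm {d}}z}{2 \mathbf{i }\pi } \frac{1}{z} e^{-x z + z^3/3} = \int _x^{+\infty }{\mathrm {d}}\lambda \, \mathrm {Ai}(\lambda )\), valid for vertical contours crossing the real axis to the right of 0.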

We now consider the limit \(b,a\rightarrow 0\) with a fixed arbitrary ratio \(r=a/b\). We use the exact expressions for the scalar products

$$\begin{aligned} \langle f_\alpha | f_\beta \rangle = \frac{\sqrt{\alpha \beta }}{\alpha + \beta } e^{\frac{\alpha ^3}{6} + \frac{\beta ^3}{6} - \frac{\alpha +\beta }{2} s} \end{aligned}$$
(7.17)

as well as

$$\begin{aligned} \mathinner {\Big \langle {f_\alpha }|}\frac{{{\tilde{A}}}_s}{I \mp {{\tilde{A}}}_s} \mathinner {|{ f_\beta }\Big \rangle } = \sqrt{\alpha \beta } e^{\frac{\alpha ^3}{6} + \frac{\beta ^3}{6} } e^{- \frac{\alpha +\beta }{2} s} \mathinner {\Big \langle {e^{-\alpha x} }|} \frac{{{\tilde{A}}}_s}{I \mp {{\tilde{A}}}_s} \mathinner {|{ e^{-\beta x}}\Big \rangle } \end{aligned}$$
(7.18)

One has

$$\begin{aligned} I_{b,b}= & {} e^{-bs} \left( \frac{1}{2} \pm bR^\mp _s + {\mathcal {O}}(b^2) \right) \quad , \quad R^\mp _s =\mathinner {\Big \langle {1}|} \frac{{{\tilde{A}}}_s}{I \mp {{\tilde{A}}}_s} \mathinner {|{1}\Big \rangle } \end{aligned}$$
(7.19)
$$\begin{aligned} I_{a,a}= & {} e^{-as} \left( \frac{1}{2} \pm aR^\mp _s + {\mathcal {O}}(a^2) \right) \end{aligned}$$
(7.20)
$$\begin{aligned} I_{b,a}= & {} I_{a,b}= \sqrt{ab} e^{-(b+a) s/2} \left( \frac{1}{a+b} \pm R^\mp _s + {\mathcal {O}}(a,b) \right) \end{aligned}$$
(7.21)
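For instance, (7.19) is obtained by inserting (7.17) with \(\alpha =\beta =b\) and (7.18) into the definition (7.16),

$$\begin{aligned} I_{b,b} = \frac{1}{2} e^{\frac{b^3}{3} - b s} \pm b\, e^{\frac{b^3}{3} - b s} \mathinner {\Big \langle {e^{-b x} }|} \frac{{{\tilde{A}}}_s}{I \mp {{\tilde{A}}}_s} \mathinner {|{ e^{-b x}}\Big \rangle } = e^{-bs} \left( \frac{1}{2} \pm bR^\mp _s + {\mathcal {O}}(b^2) \right) , \end{aligned}$$

using \(e^{b^3/3}=1+{\mathcal {O}}(b^3)\) and \(\mathinner {\langle {e^{-bx}}|} \frac{{{\tilde{A}}}_s}{I \mp {{\tilde{A}}}_s} \mathinner {|{e^{-bx}}\rangle } = R^\mp _s + {\mathcal {O}}(b)\); the estimates (7.20) and (7.21) follow in the same way.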

Plugging these asymptotics in (7.15) we obtain

$$\begin{aligned} \mathrm {Det}(I - {\hat{A}}_s)= & {} {\mathcal {O}}(b^2) \end{aligned}$$
(7.22)
$$\begin{aligned} \mathrm {Det}(I + {\hat{A}}_s)= & {} \mathrm {Det}(I + {{\tilde{A}}}_s)\times 2b(1+r)(2 R^{+}_s+s)+ {\mathcal {O}}(b^2). \end{aligned}$$
(7.23)

Furthermore, if we keep only the first order in \(b\), we may replace \({{\tilde{A}}}_s(x,y)\) by \( {\mathrm{Ai}}(x+y+s)\) as \(a,b\rightarrow 0\). Thus, we have found that

$$\begin{aligned} F(s) := F^{(0,0)}(s) = \partial _s \left[ \mathrm {Det}(I + \mathrm {Ai}_s)(2 R^{+,0}_s+s) \right] \end{aligned}$$
(7.24)

where

$$\begin{aligned} R^{+,0}_s = \mathinner {\Big \langle {1}|} \frac{\mathrm {Ai}_s }{I + \mathrm {Ai}_s } \mathinner {|{1}\Big \rangle } \end{aligned}$$
(7.25)

and \({\mathrm{Ai}}_s\) is the operator with kernel \({\mathrm{Ai}}(x+y+s)\). Using the matrix determinant lemma once more, we have that

$$\begin{aligned} \mathinner {\Big \langle {1}|} \frac{\mathrm {Ai}_s }{I + \mathrm {Ai}_s } \mathinner {|{1}\Big \rangle } = \mathinner {\Big \langle {1}|} \frac{I }{I + \mathrm {Ai}_s } \mathinner {|{\mathrm {Ai}_s 1}\Big \rangle } = \frac{\mathrm {Det}(I + \mathrm {Ai}_s+ | \mathrm {Ai}_s 1 \rangle \langle 1|)}{\mathrm {Det}(I + \mathrm {Ai}_s)}-1. \end{aligned}$$
(7.26)

This yields the following alternative formula in terms of Fredholm determinants:

$$\begin{aligned} F(s) = \partial _s \left[ 2 \mathrm {Det}(I + \mathrm {Ai}_s+| \mathrm {Ai}_s 1 \rangle \langle 1| ) + (s-2)\mathrm {Det}(I + \mathrm {Ai}_s)\right] . \end{aligned}$$
(7.27)

This is the formula given in (2.26) in the Introduction.

7.3.2 CDF in Terms of the Solution of the Painlevé II Equation

In this section, we show that the distribution F(s) can also be written in terms of the Tracy–Widom distributions \(F_1(s)\) and \(F_2(s)\) only. We also provide formulae in terms of the Hastings–McLeod solution of the Painlevé II equation (see Appendix C). Define q(s) to be the solution of the Painlevé II equation for \(s\in {\mathbb {R}}\),

$$\begin{aligned} q''(s)= s q(s) +2q(s)^3 \end{aligned}$$
(7.28)

which satisfies the asymptotic condition \(q(s) \sim _{s \rightarrow +\infty } \mathrm {Ai}(s)\). This solution is called the Hastings–McLeod solution of the Painlevé II equation [110].

Let us first recall some results from [111]. Combining formula (4.18) and the one just below (4.52) in [111], we can write

$$\begin{aligned} \mathinner {\Big \langle {1}|}\frac{I}{I+\mathrm {Ai}_s} \mathinner {|{\delta }\Big \rangle }=e^{-\int _s^{+\infty } {\mathrm {d}}t \, q(t)}. \end{aligned}$$
(7.29)

Remark 7.3

One has, see [111, Eq. (4.22)], \(q(s)= \mathinner {\Big \langle {\delta }|}\frac{\mathrm {Ai}_s}{I-K_{\mathrm {Ai},s}}\mathinner {|{\delta }\Big \rangle }\).

Another result is obtained by combining Remark 7.2 and (7.29):

$$\begin{aligned} \mathinner {\Big \langle {1}|}\frac{I}{I\pm \mathrm {Ai}_s}\mathinner {|{\delta }\Big \rangle }=\frac{\mathrm {Det}(I\mp \mathrm {Ai}_s)}{\mathrm {Det}(I\pm \mathrm {Ai}_s)}=e^{\mp \int _s^{+\infty } {\mathrm {d}}t \, q(t)}=\left( \frac{F_1(s)^2}{F_2(s)}\right) ^{\pm 1}. \end{aligned}$$
(7.30)

The last identity is from [112].
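In the manipulations below it is also useful to keep in mind the standard determinantal representations, with all operators acting on \({\mathbb {L}}^2(0,+\infty )\),

$$\begin{aligned} F_1(s) = \mathrm {Det}(I-\mathrm {Ai}_s), \qquad F_2(s) = \mathrm {Det}(I-K_{\mathrm {Ai},s}) = \mathrm {Det}(I-\mathrm {Ai}_s)\, \mathrm {Det}(I+\mathrm {Ai}_s), \end{aligned}$$

where the factorization uses \(K_{\mathrm {Ai},s}=\mathrm {Ai}_s^2\). In particular \(\mathrm {Det}(I+\mathrm {Ai}_s)=F_2(s)/F_1(s)\), which makes the chain of equalities in (7.30), as well as the computation (7.33) below, transparent.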

The next result allows us to rewrite the resolvent \(R_s^{+,0}\) appearing in (7.24) and (7.25) in terms of q.

Proposition 7.4

We have

$$\begin{aligned} \mathinner {\Big \langle {1}|}\frac{\mathrm {Ai}_s}{I+\mathrm {Ai}_s}\mathinner {|{1}\Big \rangle }=\frac{1}{2}\left( \int _{-\infty }^s {\mathrm {d}}r \, e^{-2\int _r^{+\infty } {\mathrm {d}}t \, q(t)} -s\right) . \end{aligned}$$
(7.31)

Proof

Before proving this proposition, we need the following known lemma.

Lemma 7.5

([109, Lemma 3]) We have the following relation

$$\begin{aligned} \partial _s\frac{\mathrm {Ai}_s}{I+\mathrm {Ai}_s}=-\frac{\mathrm {Ai}_s}{I-K_{\mathrm {Ai},s}} D - \frac{\mathrm {Ai}_s}{I-K_{\mathrm {Ai},s}}\mathinner {|{\delta }\Big \rangle } \mathinner {\Big \langle {\delta }|} \frac{I}{I+\mathrm {Ai}_s} \end{aligned}$$
(7.32)

where D is the derivative operator.

Since \(D\mathinner {|{1}\Big \rangle }=0\) we have

$$\begin{aligned} \begin{aligned} \partial _s \mathinner {\Big \langle {1}|}\frac{\mathrm {Ai}_s}{I+\mathrm {Ai}_s}\mathinner {|{1}\Big \rangle }&=-\mathinner {\Big \langle {1}|}\frac{\mathrm {Ai}_s}{I-K_{\mathrm {Ai},s}}\mathinner {|{\delta }\Big \rangle }\mathinner {\Big \langle {\delta }|}\frac{I}{I+\mathrm {Ai}_s}\mathinner {|{1}\Big \rangle }\\&= -\frac{1}{2}\mathinner {\Big \langle {1}|}\left( \frac{I}{I-\mathrm {Ai}_s}-\frac{I}{I+\mathrm {Ai}_s}\right) \mathinner {|{\delta }\Big \rangle }\mathinner {\Big \langle {\delta }|}\frac{I}{I+\mathrm {Ai}_s}\mathinner {|{1}\Big \rangle }\\&= -\frac{1}{2}\left( \frac{\mathrm {Det}(I+\mathrm {Ai}_s)}{\mathrm {Det}(I-\mathrm {Ai}_s)}-\frac{\mathrm {Det}(I-\mathrm {Ai}_s)}{\mathrm {Det}(I+\mathrm {Ai}_s)} \right) \times \frac{\mathrm {Det}(I-\mathrm {Ai}_s)}{\mathrm {Det}(I+\mathrm {Ai}_s)}\\&= -\frac{1}{2}\left( 1-\frac{F_1(s)^4}{F_2(s)^2} \right) \end{aligned} \end{aligned}$$
(7.33)

Using the expressions of \(F_1\) and \(F_2\) in terms of the Painlevé II equation, we have

$$\begin{aligned} \partial _s \mathinner {\Big \langle {1}|}\frac{\mathrm {Ai}_s}{I+\mathrm {Ai}_s}\mathinner {|{1}\Big \rangle } =-\frac{1}{2}\left( 1-e^{-2\int _s^{+\infty } {\mathrm {d}}t \, q(t)} \right) \end{aligned}$$
(7.34)

Recalling the notation \(R_s^{+,0}=\mathinner {\Big \langle {1}|}\frac{\mathrm {Ai}_s}{I+\mathrm {Ai}_s}\mathinner {|{1}\Big \rangle }\), we have

$$\begin{aligned} \partial _s(2R_s^{+,0}+s)=e^{-2\int _s^{+\infty } {\mathrm {d}}t \, q(t)} =\frac{F_1(s)^4}{F_2(s)^2} \end{aligned}$$
(7.35)

We integrate the last quantity between \(-\infty \) and s, using that \(q(s) \rightarrow +\infty \) as \(s \rightarrow -\infty \) (see (C.4)), and obtain

$$\begin{aligned} 2R_s^{+,0}+s=\int _{-\infty }^{s} {\mathrm {d}}r \, e^{-2\int _r^{+\infty } {\mathrm {d}}t \, q(t)} + \kappa . \end{aligned}$$
(7.36)

Since \(R_s^{+,0} \rightarrow 0\) in the limit \(s\rightarrow +\infty \), and using (C.2) and the asymptotics (C.5) and (C.6), we obtain that \(\kappa =0\), which concludes the proof. \(\square \)

Using Proposition 7.4 we arrive at the following equivalent expressions for F(s) (defined in (7.24)). The first one reads

$$\begin{aligned} F(s)=\partial _s\left[ \frac{F_2(s)}{F_1(s)} \int _{-\infty }^s {\mathrm {d}}t \, \frac{F_1(t)^4}{F_2(t)^2} \right] . \end{aligned}$$
(7.37)

This formula was given in (2.28). The second formula is expressed in terms of the Hastings–McLeod solution of the Painlevé II equation (see Appendix C) and reads

$$\begin{aligned} \begin{aligned} F(s)&=\partial _s\left[ e^{ -\frac{1}{2} \int _s^{+\infty } {{\mathrm {d}}} r [ (r-s)q(r)^2-q(r) ]} \int _{-\infty }^{s} {\mathrm {d}}r \, e^{-2\int _r^{+\infty } {\mathrm {d}}t \, q(t)} \right] \\&=\partial _s\left[ e^{ -\frac{1}{2} \int _s^{+\infty } {{\mathrm {d}}} r \, [(r-s)q(r)^2+3q(r)]}(s-2q'(s)+2q(s)^2) \right] . \end{aligned} \end{aligned}$$
(7.38)

This formula was given in (2.29). A third formula, useful for the asymptotics, is obtained by applying the derivative to the bracket and using Eqs. (C.2) and (C.3):

$$\begin{aligned} \begin{aligned} F(s)&=\; e^{ -\frac{1}{2} \int _s^{+\infty } {{\mathrm {d}}} r \, [ (r-s)q(r)^2+3 q(r)]} \\&\quad \times \left( 1+\frac{1}{2}(q'(s)^2-sq(s)^2-q(s)^4-q(s))(s-2q'(s)+2q(s)^2)\right) \end{aligned} \end{aligned}$$
(7.39)

7.3.3 Properties of F(s): First Moments

We check in Appendix C, using formula (7.39), that the function F(s) has the behaviour at \( s \rightarrow \pm \infty \) required for a CDF, i.e. its limit at \(s= -\infty \) is 0 and its limit at \(s=+\infty \) is 1. The detailed asymptotic analysis for \(s \rightarrow \pm \infty \) is performed in Appendix D. The CDF takes the form \(F(s)=\partial _s [\mathsf{F}(s)]\). Provided \(\mathsf{F}(s) \rightarrow 0\) sufficiently fast as \(s \rightarrow -\infty \) and \(\mathsf{F}(s) - s \rightarrow 0\) sufficiently fast as \(s \rightarrow +\infty \), conditions which can be checked from Appendix D, integration by parts gives the following formula for the k-th positive integer moment \(M_k\) of the distribution F(s):

$$\begin{aligned} M_k = k(k-1) \int _{\mathbb {R}}(\mathsf{F}(s)-\max (s,0)) s^{k-2} {{\mathrm {d}}} s \end{aligned}$$
(7.40)

This formula can be used to obtain the moments and the cumulants through a numerical evaluation of F(s). One notices that the mean vanishes, \(M_1=0\), which indeed must be the case since \({\mathbb {E}}[H(t)]=0\) in the critical stationary case (see Sect. 4.3, where we have computed the expectation of h(x,t) using the stationary structure of the log-gamma polymer). We used two numerical methods to evaluate F(s). The first one uses Eq. (7.27), where the Fredholm determinants are calculated using the method described in Refs. [76, 113]. The second method uses formula (7.37) together with the Mathematica routines for \(F_{1,2}(s)\); the two methods are in agreement. The CDF F(s) and its derivative \(F'(s)\) are plotted in Fig. 3. The mean, variance, skewness and excess kurtosis are given in Table 1.
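For concreteness, here is a minimal Python sketch of the first method (it is illustrative and is not the code used for Fig. 3 and Table 1): the two Fredholm determinants entering (7.27) are discretized by Gauss–Legendre quadrature on a truncated half-line, in the spirit of the quadrature method of [76, 113]; the truncation length, number of nodes and finite-difference step are ad hoc choices.

import numpy as np
from scipy.special import airy
from scipy.linalg import det

# Illustrative sketch: Fredholm determinants of (7.27) via Gauss-Legendre
# quadrature (Nystrom discretization) on the truncated half-line [0, L].
# Parameters n, L, ds are ad hoc; they are not taken from the text.

def Ai(z):
    return airy(z)[0]                      # airy returns (Ai, Ai', Bi, Bi')

def bracket_F(s, n=80, L=12.0):
    """Bracket of (7.27): 2 Det(I + Ai_s + |Ai_s 1><1|) + (s - 2) Det(I + Ai_s),
    where Ai_s has kernel Ai(x + y + s) on L^2(0, +infty)."""
    u, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * L * (u + 1.0)                    # map nodes to [0, L]
    w = 0.5 * L * w
    sw = np.sqrt(w)
    K = Ai(x[:, None] + x[None, :] + s)        # kernel Ai_s(x_i, x_j)
    uvec = K @ w                               # (Ai_s 1)(x_i) = int_0^L Ai(x_i + lam + s) dlam
    D_plus = det(np.eye(n) + sw[:, None] * K * sw[None, :])
    D_rank1 = det(np.eye(n) + sw[:, None] * (K + uvec[:, None]) * sw[None, :])
    return 2.0 * D_rank1 + (s - 2.0) * D_plus

def F(s, ds=1e-4):
    """CDF F(s) = d/ds of the bracket, via a central finite difference."""
    return (bracket_F(s + ds) - bracket_F(s - ds)) / (2.0 * ds)

if __name__ == "__main__":
    for s in (-3.0, -1.0, 0.0, 2.0):
        print(f"s = {s:+.1f}   F(s) ~ {F(s):.6f}")

Identifying \(\mathsf{F}(s)\) with the bracket of (7.27) (it behaves as s at \(+\infty \) and vanishes at \(-\infty \), see Appendix D), the moments (7.40) then follow by one further quadrature in s.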

7.4 Limit \(a, b\rightarrow +\infty \): Convergence to the GSE

As already discussed in Remark 6.3, the limit \(a, b\rightarrow +\infty \) can be performed on the Fredholm Pfaffian formula and leads to GSE Tracy–Widom fluctuations. This limit can also be performed on the formula (7.11) involving the scalar kernel \({\hat{A}}_s\) defined in (7.7) in terms of the function \(A^{(a, b)}(x)\) defined in (7.6). It is clear from the definition of \(A^{(a, b)}(x)\) that if \(a,b\rightarrow +\infty \) simultaneously, then \(A^{(a, b)}(x)\) converges to the standard Airy function. The CDF \(F^{(a,b)}(s)\) in (7.11) then takes, for \(a, b\rightarrow +\infty \), the form of the GSE Tracy–Widom distribution found in [53, Eq. (35)]. This result thus matches smoothly with the result (2.16), valid for any fixed \(A,B>-1/2\) in the large time limit.
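Concretely, since \({\hat{A}}_s \rightarrow \mathrm {Ai}_s\) and \(\partial _s/(a+b) \rightarrow 0\) in this limit, formula (7.11) reduces (formally exchanging the limit with the determinants) to

$$\begin{aligned} \lim _{a,b \rightarrow +\infty } F^{(a, b)}(s) = \frac{1}{2} \left( \mathrm {Det}(I - \mathrm {Ai}_s) + \mathrm {Det}(I + \mathrm {Ai}_s)\right) = \frac{1}{2} \left( F_1(s) + \frac{F_2(s)}{F_1(s)} \right) , \end{aligned}$$

which is the determinantal form of the GSE Tracy–Widom distribution referred to above.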

7.5 Limit \((a,b) \rightarrow (0,+\infty )\): Convergence to the GOE

Another interesting limit, which we call \(F^{(a)}(s)= \lim _{b\rightarrow +\infty } F^{(a,b)}(s)\), is the limit \(b\rightarrow + \infty \) at fixed \(a\); by the \(a \leftrightarrow b\) symmetry, the case \(a\rightarrow +\infty \) at fixed \(b\) is similar. In particular, we now consider the limit \(a\rightarrow 0\).

The manipulations closely follow those of Sect. 7.3. We start by moving the contour to the right in the definition of \(A^{(a, \infty )}\) in (7.6), already taking into account the \(b\rightarrow +\infty \) limit. We obtain \( A^{(a, \infty )}(x) = \breve{A}^{(a, \infty )}(x) + 2 h_a(x)\), with \(h_a(x)=ae^{- x a+ a^3/3}\) and

$$\begin{aligned} \breve{A}^{(a, \infty )}(x) = \int \frac{{\mathrm {d}}z}{2 \mathbf{i }\pi } \frac{a+z}{a-z} e^{-x z + z^3/3} =- {\mathrm{Ai}}(x) - 2 a\int _x^{+\infty }{\mathrm {d}}\lambda \, \mathrm {Ai}(\lambda ) + {\mathcal {O}}(a^2), \end{aligned}$$
(7.41)

where in the integral over z, the contour passes to the right of \(a\). We introduce the operator \(\breve{A}_s\) acting on \({\mathbb {L}}^2(0,+\infty )\) with kernel \( \breve{A}_s (x,y)= \breve{A}^{(a, \infty )}(s+x+y)\). We thus write

$$\begin{aligned} {\hat{A}}_s(x,y)= \breve{A}_s(x,y) + 2 |f_a(x) \rangle \langle f_a(y) | , \end{aligned}$$
(7.42)

with \(f_a(x) = \sqrt{a} e^{a^3/6 - as/2 -ax}\). Using the matrix determinant lemma, we have

$$\begin{aligned} \mathrm {Det}(I \pm {\hat{A}}_s) = \mathrm {Det}(I \pm \breve{A}_s) \left( 1 \pm 2 \langle f_a| f_a \rangle - 2 \left\langle f_a | \frac{\breve{A}_s}{I \pm \breve{A}_s} | f_a \right\rangle \right) \end{aligned}$$
(7.43)

We now consider the limit \(a\rightarrow 0\) and we use the exact expressions for the scalar product \(\langle f_a | f_a \rangle = \frac{1}{2} e^{\frac{a^3}{3} - a s} \) as well as

$$\begin{aligned} \mathinner {\Big \langle {f_a }|}\frac{ \breve{A}_s}{I \pm \breve{A}_s} \mathinner {|{ f_a }\Big \rangle } = a e^{\frac{a^3}{3} -as} \mathinner {\Big \langle {e^{-a x} }|} \frac{\breve{A}_s}{I \pm \breve{A}_s} \mathinner {|{ e^{-a x}}\Big \rangle } \end{aligned}$$
(7.44)

One has

$$\begin{aligned} \mathrm {Det}(I \pm {\hat{A}}_s) = \mathrm {Det}(I \pm \breve{A}_s) \left( 1 \pm e^{-as} - 2 e^{-as} a \mathinner {\Big \langle {1}|} \frac{\breve{A}_s}{I \pm \breve{A}_s} \mathinner {|{1}\Big \rangle } +{\mathcal {O}}(a^2)\right) \end{aligned}$$
(7.45)

Looking at formula (7.11) in the limit \(b \rightarrow +\infty \), we see that we only need the following estimates, obtained from (7.45) up to \({\mathcal {O}}(a)\) as \(a \rightarrow 0\):

$$\begin{aligned} \mathrm {Det}(I - {\hat{A}}_s)= & {} {\mathcal {O}}(a) \end{aligned}$$
(7.46)
$$\begin{aligned} \mathrm {Det}(I + {\hat{A}}_s)= & {} \mathrm {Det}(I-\mathrm {Ai}_s)(2+{\mathcal {O}}(a)). \end{aligned}$$
(7.47)

Inserted in (7.11), these estimates lead to

$$\begin{aligned} F^{(0)}(s):=F^{(0,\infty )}(s)=F^{(\infty ,0)}(s)=\mathrm {Det}(I-\mathrm {Ai}_s) = F_1(s) \end{aligned}$$
(7.48)

This coincides with the determinantal representation of the GOE Tracy–Widom CDF. This formula matches smoothly with the result (2.17), which states that the CDF of the one-point KPZ height field is given by the GOE Tracy–Widom distribution for \(A=-1/2\) and any \(B>-1/2\).