1 Introduction

The K-Bessel function \(K_{r + i t}(y)\) (see 4.1 for the definition) appears in many places in mathematics, for example in the Fourier expansion of Eisenstein series (for background on Eisenstein series, see [14] for example). These series are important automorphic functions (namely, functions invariant under a cofinite Fuchsian group) because they are eigenfunctions of the non-Euclidean Laplacian (i.e. the operator \(D:=y^2(\frac{\partial ^2}{\partial x^2}+ \frac{\partial ^2}{\partial y^2})\)). In this paper, we produce bounds for \(K_{r + i t}(y)\) in the novel case of positive, real argument y and large complex order \(r+it\), where r is bounded and t varies linearly with y in all possible ways. In particular, we compute the dominant term of the asymptotic expansion of \(K_{r + i t}(y)\) as \(y \rightarrow \infty \) in the two cases \(t = y \sin \theta \) for a fixed parameter \(0\le \theta \le \pi /2\) (Theorem 1.1) and \(t= y \cosh \mu \) for a fixed parameter \(\mu >0\) (Theorem 1.3). (The case \(t<0\) is also handled, as Remark 2.1 shows.) Thus our results concern y and t both approaching infinity. Except for the case \(\theta =\pi /2\), we prove Theorems 1.1 and 1.3 using Laplace’s method (see [17, p. 127, Theorem 7.1] or [16] for example) in Sect. 2.

Theorems 1.1 and 1.3 are nonuniform results. In the case where t and y are nearly equal, we will find that there are two relevant saddle points. As t/y approaches 1, the two saddle points coalesce and the error terms in these results blow up. Consequently, a uniform result in this case is highly desirable. We give such a uniform result (Theorems 1.5 and 1.6) in Sect. 3. The uniform result is important not only for the completeness of the estimates for the K-Bessel function but also for applications.

One such application for estimates on K-Bessel functions is the study of Eisenstein series. As an application of our results, we will, in Sect. 5, give bounds on the weight-zero Eisenstein series \(E_0^{(j)}(z, r+it)\) for each inequivalent cusp \(\kappa _j\) when \(1/2 < r \le 3/2\). (For the case \(r=1/2\), we will use known estimates on the K-Bessel function, \(K_{i t}(y)\), to give bounds on the Eisenstein series, \(E_0^{(j)}(z, 1/2+it)\).) Already, our nonuniform results suffice to give bounds on the Fourier coefficients of these Eisenstein series (Theorem 1.10) when \(1/2 < r \le 3/2\). However, to bound the Eisenstein series themselves (Theorem 1.12), it is necessary, when \(1/2 < r \le 3/2\), to use our uniform results.

1.1 Statement of results

Let \(\nu := r +it\). Our first two results (Theorems 1.1 and 1.3) together give an asymptotic for \(K_{\nu }(y)\) for large order but bounded r (so |t| grows to infinity) and positive, real argument y. There are two cases: \(y \ge t \ge 0\) (Theorem 1.1) and \(0< y < t\) (Theorem 1.3). When \(t <0\), see Remark 2.1.

We note that more terms of the asymptotic expansions found in Theorems 1.1, 1.3, 1.5, and 1.6 could be computed using the techniques in this paper; however, these computations quickly become tedious and are omitted.

Theorem 1.1

Let \(M\ge 0\) and \(0\le \theta \le \pi /2\) be fixed real numbers. Let \(|r|\le M\), \(0< y \in {\mathbb R}\), and

$$\begin{aligned} t = y \sin \theta . \end{aligned}$$
(1.1)

Then

$$\begin{aligned} K_\nu (y) = {\left\{ \begin{array}{ll} \sqrt{\frac{\pi }{2 y \cos \theta }}e^{-y (\cos \theta +\theta \sin \theta )}e^{ir\theta } +O\left( y^{-3/2}e^{-y (\cos \theta +\theta \sin \theta )} \right) &{} \text { if } 0 \le \theta < \frac{\pi }{2} \\ e^{-\frac{\pi }{2} y +i\frac{\pi }{2} r}y^{-1/3}\frac{\varGamma (\frac{1}{3})}{2^{\frac{2}{3}}3^{\frac{1}{6}}} +O\left( y^{-2/3}e^{-\frac{\pi }{2} y +i\frac{\pi }{2} r} \right) &{} \text { if } \theta = \frac{\pi }{2}\end{array}\right. } \end{aligned}$$

as \(y \rightarrow \infty \). Here, the implied constants depend on \(\theta \) and M for the case \(0 \le \theta < \frac{\pi }{2}\) and on M for the case \(\theta = \frac{\pi }{2}\).
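Although no numerics enter the proof, the first case of Theorem 1.1 is easy to sanity-check. The sketch below (assuming the mpmath library is available; the sample values of \(\theta \), r, and y are ad hoc) compares \(K_\nu (y)\) with the dominant term:

```python
# Sanity check of the dominant term in Theorem 1.1 (case 0 <= theta < pi/2).
# Ad hoc sample values; mpmath evaluates K_nu(y) for complex order nu.
import mpmath as mp

mp.mp.dps = 40
theta, r, y = mp.mpf("0.6"), mp.mpf("0.3"), mp.mpf(100)
t = y * mp.sin(theta)          # t = y sin(theta), as in (1.1)
nu = mp.mpc(r, t)              # nu = r + i t

exact = mp.besselk(nu, y)
dominant = (mp.sqrt(mp.pi / (2 * y * mp.cos(theta)))
            * mp.exp(-y * (mp.cos(theta) + theta * mp.sin(theta)))
            * mp.exp(1j * r * theta))

rel_err = abs(exact - dominant) / abs(dominant)
print(rel_err)  # the error term predicts a relative error of size O(1/y)
```

The observed relative error shrinks like 1/y as y grows, consistent with the stated \(O(y^{-3/2}e^{-y (\cos \theta +\theta \sin \theta )})\) error term.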

Remark 1.2

In the special case of purely imaginary order, our result agrees with standard results. As examples, see [9, p. 87 (18)] and [5, (14)] for the case \(0< \theta < \frac{\pi }{2}\) and [21, pp. 78, 247] and [5, (14)] for the case \(\theta = \frac{\pi }{2}\). Also, note that the \(\varGamma \) in the statement of the theorem refers to the gamma function.

Theorem 1.3

Let \(M \ge 0\) and \(\mu > 0\) be fixed real numbers. Let \(|r|\le M\), \(0< y \in {\mathbb R}\), and

$$\begin{aligned} t = y \cosh \mu . \end{aligned}$$
(1.2)

Then

$$\begin{aligned} K_\nu (y)&= \sqrt{\frac{2\pi }{y \sinh \mu }}e^{-y \frac{\pi }{2} \cosh \mu +i r \frac{\pi }{2}} \left[ \cosh (r \mu ) \sin \left( \frac{\pi }{4} - y \left( \sinh \mu - \mu \cosh \mu \right) \right) \right. \\&\left. \quad -\, i \sinh (r \mu )\cos \left( \frac{\pi }{4} - y \left( \sinh \mu - \mu \cosh \mu \right) \right) \right] \\&\quad +\, O\left( y^{-3/2}e^{-y\left( \frac{\pi }{2} \cosh \mu + i \left( \sinh \mu - \mu \cosh \mu \right) \right) } \right) \\&\quad +\, O\left( y^{-3/2}e^{-y\left( \frac{\pi }{2} \cosh \mu - i \left( \sinh \mu - \mu \cosh \mu \right) \right) } \right) \end{aligned}$$

as \(y \rightarrow \infty \). Here, the implied constants depend on \(\mu \) and M.

Remark 1.4

Since \(y \sinh \mu = \sqrt{t^2 - y^2}\) and \(\mu = \cosh ^{-1}(\frac{t}{y})\) hold, our result, in the special case of purely imaginary order, reduces to the standard result for purely imaginary order (see [9, p. 88 (19)] for example), namely:

$$\begin{aligned} K_{it}(y) \sim \sqrt{2\pi }(t^2 - y^2)^{-\frac{1}{4}}e^{-t \frac{\pi }{2}}\sin \left( \frac{\pi }{4} - (t^2 - y^2)^{\frac{1}{2}} + t \cosh ^{-1}\left( \frac{t}{y}\right) \right) , \end{aligned}$$

as \(y \rightarrow \infty \).
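Theorem 1.3 can be checked the same way; in the sketch below (mpmath again, with ad hoc values of r, \(\mu \), and y), the absolute error is measured against the envelope \(\sqrt{2\pi /(y \sinh \mu )}\, e^{-y \frac{\pi }{2} \cosh \mu }\), since the dominant term oscillates:

```python
# Sanity check of the dominant term in Theorem 1.3 (ad hoc sample values).
import mpmath as mp

mp.mp.dps = 40
r, mu, y = mp.mpf("0.3"), mp.mpf("0.7"), mp.mpf(80)
t = y * mp.cosh(mu)            # t = y cosh(mu), as in (1.2)
nu = mp.mpc(r, t)

exact = mp.besselk(nu, y)
amp = mp.sqrt(2 * mp.pi / (y * mp.sinh(mu))) * mp.exp(-y * mp.pi / 2 * mp.cosh(mu))
arg = mp.pi / 4 - y * (mp.sinh(mu) - mu * mp.cosh(mu))
dominant = (amp * mp.exp(1j * r * mp.pi / 2)
            * (mp.cosh(r * mu) * mp.sin(arg) - 1j * mp.sinh(r * mu) * mp.cos(arg)))

print(abs(exact - dominant) / amp)  # the error terms are of size amp * O(1/y)
```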

Our next two results (Theorems 1.5 and 1.6) give a uniform asymptotic for \(K_{\nu }(y)\) for large order but bounded r and positive, real argument y in the case where t and y are nearly equal (or equal). Here, there are also two cases: \(y \ge t \ge 0\) (Theorem 1.5) and \(0< y < t\) (Theorem 1.6). When \(t <0\), see Remark 2.1. Note that \({\text {Ai}}(\cdot )\) is the Airy function.

Theorem 1.5

Let \(M\ge 0\) and \(0< \theta \le \frac{\pi }{2}\) be real numbers. Let \(|r|\le M\), \(0< y \in {\mathbb R}\), and

$$\begin{aligned} t = y \sin \theta . \end{aligned}$$
(1.3)

Then there exists a (small) \(\theta _0>0\), which does not depend on t or y, such that, for all \(\frac{\pi }{2} - \theta _0 \le \theta \le \frac{\pi }{2}\), we have

$$\begin{aligned} K_\nu (y)&= \ \frac{\pi \sqrt{2}}{y^{1/3}} e^{-y \frac{\pi }{2} \sin \theta + i r \frac{\pi }{2}} \cos \left( r \theta - r \frac{\pi }{2}\right) \left( \frac{\zeta }{\cos ^2 \theta } \right) ^{1/4} {\text {Ai}}\left( y^{2/3} \zeta \right) \\&\quad - \frac{i \pi \sqrt{2}}{y^{2/3}} e^{-y \frac{\pi }{2} \sin \theta + i r \frac{\pi }{2}} \sin \left( r \theta - r \frac{\pi }{2}\right) \zeta ^{-1/2} \left( \frac{\zeta }{\cos ^2 \theta } \right) ^{1/4} {\text {Ai}}'\left( y^{2/3} \zeta \right) \\&\quad + O\left( \frac{{\text {Ai}}\left( y^{2/3} \zeta \right) e^{-y \frac{\pi }{2} \sin \theta }}{y^{4/3}}\right) + O\left( \frac{{\text {Ai}}'\left( y^{2/3} \zeta \right) e^{-y \frac{\pi }{2} \sin \theta }}{y^{5/3}}\right) \end{aligned}$$

as \(y \rightarrow \infty \). Here \(\zeta = \left[ \frac{3}{2} \left( \theta \sin \theta + \cos \theta - \frac{\pi }{2} \sin \theta \right) \right] ^{2/3}\) is a nonnegative real number and the implied constants depend on \(\theta _0\) and M.

Theorem 1.6

Let \(M \ge 0\) and \(\mu \ge 0\) be real numbers. Let \(|r|\le M\), \(0< y \in {\mathbb R}\), and

$$\begin{aligned} t = y \cosh \mu . \end{aligned}$$
(1.4)

Then there exists a (small) \(\mu _0>0\), which does not depend on t or y, such that, for all \(0 \le \mu \le \mu _0\), we have

$$\begin{aligned} K_\nu (y) =&\ \frac{\pi \sqrt{2}}{y^{1/3}} e^{-y \frac{\pi }{2} \cosh \mu + i r \frac{\pi }{2}} \cosh \left( r \mu \right) \left( \frac{\zeta }{-\sinh ^2 \mu } \right) ^{1/4} {\text {Ai}}\left( y^{2/3} \zeta \right) \\&- \frac{ \pi \sqrt{2}}{y^{2/3}} e^{-y \frac{\pi }{2} \cosh \mu + i r \frac{\pi }{2}} \sinh \left( r \mu \right) \zeta ^{-1/2} \left( \frac{\zeta }{-\sinh ^2 \mu } \right) ^{1/4} {\text {Ai}}'\left( y^{2/3} \zeta \right) \\&+ O\left( \frac{{\text {Ai}}\left( y^{2/3} \zeta \right) e^{-y \frac{\pi }{2} \cosh \mu }}{y^{4/3}}\right) + O\left( \frac{{\text {Ai}}'\left( y^{2/3} \zeta \right) e^{-y \frac{\pi }{2} \cosh \mu }}{y^{5/3}}\right) \end{aligned}$$

as \(y \rightarrow \infty \). Here \(\zeta = -\left[ \frac{3}{2} \left( \mu \cosh \mu - \sinh \mu \right) \right] ^{2/3}\) is a nonpositive real number and the implied constants depend on \(\mu _0\) and M.

Remark 1.7

We make a few observations.

  1.

    When \(r=0\), our result agrees with the standard result by Balogh [3]. To see this, let us use \({\widetilde{\zeta }}\) to denote \(\zeta \) from [3] to distinguish it from our use of \(\zeta \). For the first case, letting \({\widetilde{\theta }} = \theta - \pi /2\), we note that \(\sec ^{-1}(\sec {\widetilde{\theta }}) = - {\widetilde{\theta }}\) as \({\widetilde{\theta }} <0\), which yields via a short computation that \(\zeta = {\widetilde{\zeta }} \cos ^{2/3} {\widetilde{\theta }}\). As \(y \cos {\widetilde{\theta }} = t\), the agreement follows. For the second case, we note that \(\sec ^{-1}(1/\cosh \mu ) = \cos ^{-1}(\cosh \mu ) = i \mu \), which yields the nonpositive real number \(\zeta = {\widetilde{\zeta }} \cosh ^{2/3} \mu \) and agreement.

  2.

    The expressions remain defined as \(\theta \rightarrow \frac{\pi }{2}\) and as \(\mu \rightarrow 0\) by Taylor approximation.

  3.

    When \(r \ne 0\), there are order \(y^{-2/3}\) terms, unlike when \(r=0\).
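The uniform approximation can also be tested numerically. The sketch below implements the two displayed terms of Theorem 1.5 near \(\theta = \pi /2\) (mpmath supplies \({\text {Ai}}\) and \({\text {Ai}}'\); the offset 0.05 is an ad hoc value assumed to lie within the \(\theta _0\) window):

```python
# Sanity check of the two displayed terms of Theorem 1.5 (ad hoc values).
import mpmath as mp

mp.mp.dps = 40
r, y = mp.mpf("0.3"), mp.mpf(100)
theta = mp.pi / 2 - mp.mpf("0.05")
nu = mp.mpc(r, y * mp.sin(theta))

zeta = (mp.mpf(3) / 2 * (theta * mp.sin(theta) + mp.cos(theta)
                         - mp.pi / 2 * mp.sin(theta))) ** (mp.mpf(2) / 3)
pref = mp.exp(-y * mp.pi / 2 * mp.sin(theta) + 1j * r * mp.pi / 2)
quarter = (zeta / mp.cos(theta) ** 2) ** (mp.mpf(1) / 4)
arg = y ** (mp.mpf(2) / 3) * zeta

term1 = (mp.pi * mp.sqrt(2) / y ** (mp.mpf(1) / 3) * pref
         * mp.cos(r * theta - r * mp.pi / 2) * quarter * mp.airyai(arg))
term2 = (-1j * mp.pi * mp.sqrt(2) / y ** (mp.mpf(2) / 3) * pref
         * mp.sin(r * theta - r * mp.pi / 2) / mp.sqrt(zeta) * quarter
         * mp.airyai(arg, derivative=1))

exact = mp.besselk(nu, y)
print(abs(exact - (term1 + term2)) / abs(term1))
```

The discrepancy is of relative size O(1/y), in line with the two error terms of the theorem.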

We also give a result for small y, which will be applied in the computation of our bounds for the Eisenstein series.

Proposition 1.8

For \(3/2 \ge r \ge 1/2\), \(|t| \ge t_0\), and \(0<y <1\), we have

$$\begin{aligned}K_{r - 1/2 +i t}(y) = O( y^{1/2-r} e^{-|t|\pi /2} |t|^{r-1})\end{aligned}$$

where the implied constant depends only on \(t_0\) and is uniformly bounded for all large enough \(t_0\).

Remark 1.9

Here, \(t_0\ge 1\) is chosen to be a fixed large constant (large enough to use the first term in the Stirling asymptotic series for the gamma function for the approximation in the proof of the proposition below).
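The bound of Proposition 1.8 can be illustrated numerically; the sketch below (mpmath, with ad hoc sample triples (r, t, y)) checks that the ratio of \(|K_{r-1/2+it}(y)|\) to the stated bound stays of moderate size:

```python
# Illustration of Proposition 1.8: |K_{r-1/2+it}(y)| versus the stated bound.
import mpmath as mp

mp.mp.dps = 40
ratios = []
for r, t, y in [("1", "20", "0.5"), ("1.5", "30", "0.3"), ("0.5", "25", "0.7")]:
    r, t, y = mp.mpf(r), mp.mpf(t), mp.mpf(y)
    nu = mp.mpc(r - mp.mpf(1) / 2, t)
    bound = y ** (mp.mpf(1) / 2 - r) * mp.exp(-t * mp.pi / 2) * t ** (r - 1)
    ratios.append(abs(mp.besselk(nu, y)) / bound)
print([float(q) for q in ratios])  # each ratio is O(1)
```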

Let \(z:=x+iy, s:=r+it \in {\mathbb C}\). As an application of our above results, we compute bounds on the Eisenstein series for large enough |t|. Let \(G:= {\text {PSL}}_2({\mathbb R})\), \(\varGamma \subset G\) be a cofinite Fuchsian group, and \({\mathbb H}\) be the upper-half plane model of the hyperbolic plane (i.e. with the Poincaré metric). The group G acts transitively on \({\mathbb H}\) on the left via Möbius transformations, and, moreover, these transformations are orientation-preserving isometries. We assume that \(\varGamma \backslash {\mathbb H}\) has at least one cusp, that one of these cusps is located at \(\infty \), and that the cusp \(\kappa _1:=\infty \) (called the standard cusp) has stabilizer

$$\begin{aligned} \varGamma _1:=\varGamma _\infty := \left\{ \left( \begin{array}{cc} 1 &{} b \\ 0 &{} 1 \end{array} \right) \bigg \vert \ b \in {\mathbb Z}\right\} \end{aligned}$$

in \(\varGamma \). As \(\varGamma _\infty \) acts on the unit strip \([0,1] \times (0, \infty )\) to tessellate \({\mathbb H}\), the quotient \(\varGamma _\infty \backslash \varGamma \) tessellates the unit strip so as to agree with the tessellation of \({\mathbb H}\) given by \(\varGamma \), and we have a canonical fundamental domain F that extends to infinity for the \(\varGamma _\infty \backslash \varGamma \) action; to determine F, let the real part of the points of F range between 0 and 1, inclusive of 0. Often we will consider the topological closure \({\overline{F}}\).

There are, in general, a finite number of inequivalent cusps \(\{\kappa _j\}_{j=1}^q \subset {\mathbb R}\cup \{\infty \}\), and the stabilizer in \(\varGamma \) of a cusp \(\kappa _j\) is a parabolic subgroup \(\varGamma _j\) (see, for example, [10, Chap. 6] for the definition of inequivalent cusps). For each inequivalent cusp, we choose \(\sigma _j \in G\) such that \(\sigma _j(\kappa _j) =\infty \), namely taking the cusp \(\kappa _j\) into the standard cusp. (We always choose \(\sigma _1\) to be the identity.) Note that \(\sigma _j\) is not in \(\varGamma \) for any \(j \in \{2, \ldots , q\}\). By modifying \(\sigma _j\) for \(j \in \{2, \ldots , q\}\), we can ensure that

$$\begin{aligned} \sigma _j({\overline{F}}) \cap \{z \in {\mathbb H}: y \ge B\} = [0,1] \times [B, \infty ) \end{aligned}$$
(1.5)

holds for all \(j \in \{1, \ldots , q\}\) and for all \(B \ge B_0>1\) (see [18, (2.2)] or [10, p. 268]). Here \(B_0\) is a fixed constant depending only on \(\varGamma \). Let us denote the j-th cuspidal region in \({\overline{F}}\) by \({\mathscr {C}}_{j,B}\):

$$\begin{aligned}{\mathscr {C}}_{j,B} := \sigma _j^{-1}\left( [0,1] \times [B, \infty )\right) \subset {\overline{F}}.\end{aligned}$$

And define the bounded region of \({\overline{F}}\) by

$$\begin{aligned}F_B:={\overline{F}} - \bigcup _{j=1}^q {\mathscr {C}}_{j,B}.\end{aligned}$$

There is an Eisenstein series \(E^{(j)}(z, s)\) of weight 0 for each inequivalent cusp [10, Definition 3.5, p. 280]:

$$\begin{aligned} E^{(j)}(z,s) :=E_0^{(j)}(z, s):=\sum _{\sigma \in \varGamma _j \backslash \varGamma } ({\text {Im}}(\sigma _j \sigma z))^s, \quad E(z,s):= E^{(1)}(z,s):= E_0^{(1)}(z,s).\end{aligned}$$

The Fourier expansion at the standard cusp is the following (see [15, Lemma 2.6] or [10, p. 280] for example):

$$\begin{aligned} E^{(j)}_0(z,r +i t)&= \delta _{j1}y^{r+it} +\varphi _{j1}(r +it) y^{1-r - it} \\ \nonumber&\quad +\, \sum _{n\ne 0} \psi _{n,j} (r+it) \sqrt{y} K_{r-1/2+it}(2 \pi |n|y)e^{2 \pi i n x}, \end{aligned}$$
(1.6)

where \(\varphi _{j1}(r +it)\) is an element in the scattering matrix \(\Phi (r+it) = (\varphi _{jk}(r+it))\) (cf. [10, Chap. 8]) and the \(\psi _{n,j} (r+it)\) are the Fourier coefficients. The series \(E^{(j)}_0(z,r+it)\) has no poles for \(|t|\ge 1\) (see [14, 18]); for brevity, we write \(c_n := \psi _{n,j} (r+it)\).

We first give a bound on the Fourier coefficients of the Eisenstein series, the proof of which only requires our nonuniform bounds on the K-Bessel function (Theorem 1.3 in particular).

Theorem 1.10

Let \(t_0\ge B_0\) be a large constant. For \(N \ge 1\), \(3/2\ge r > 1/2\), and \(|t| \ge t_0\), we have

$$\begin{aligned}\sum _{1 \le |n|\le N}|c_n|^2 = O\left( e^{|t|\pi } (N+|t|)\right) \left\{ \omega (t) + \left( |t| + \frac{N}{|t|}\right) ^{2r-1} \right\} ,\end{aligned}$$

where the implied constant depends only on the lattice subgroup \(\varGamma \) and \(t_0\).

Remark 1.11

Note that [18, Proposition 4.1] gives a bound for the case of \(r=\frac{1}{2}\). Here, \(\omega (t)\) denotes the spectral majorant function whose properties are \(\omega (-R) = \omega (R) \ge 1\) and

$$\begin{aligned} \int _{-T}^{T} \omega (R)~\mathrm {d}{R}= O(T^2) \end{aligned}$$
(1.7)

as \(|T| \rightarrow \infty \) [10, pp. 161, 299, 315]. The implied constant depends only on the lattice subgroup \(\varGamma \).

Finally, we give a bound on the Eisenstein series themselves, the proof of which requires our bounds on the Fourier coefficients and on the K-Bessel function. Note that our uniform bound for the K-Bessel function is essential here.

Theorem 1.12

Let \(j \in \{1, \ldots , q\}\), \(t_0\ge B_0\) be a large constant, \(|t| \ge t_0\), \(\frac{3}{2} \ge r \ge \frac{1}{2}\), \(y >0\), and \(\varepsilon >0\). Then, we have

$$\begin{aligned}&E^{(j)}_0(z, r + it) \\&\quad ={\left\{ \begin{array}{ll} \delta _{j1}y^{1/2+it} + O(y^{1/2})+O\left( y^{-1/2 -\varepsilon }\sqrt{\omega (t)} |t|^{1+\varepsilon } \right) &{} \text { if }\quad r = \frac{1}{2} \text { and } 0<y<1 , \\ \delta _{j1}y^{r+it} + O(y^{1-r})+O(y^{1-r})\left( \left( \frac{|t|}{y}\right) ^{r+1/2} + \frac{|t|}{y} \sqrt{\omega (t)} \right) &{} \text { if }\quad 1 \ge r> \frac{1}{2} \text { and } 0<y<1 , \\ \delta _{j1}y^{r+it} + O(y^{1-r})+O\left( \left( \frac{|t|}{y}\right) ^{2r-1/2} + \left( \frac{|t|}{y}\right) ^r\sqrt{\omega (t)} \right) &{} \text { if }\quad \frac{3}{2} \ge r> 1 \text { and } 0<y<1, \\ \delta _{j1}y^{1/2+it} + O(y^{1/2}) + O\left( |t|^{1+\varepsilon }\sqrt{\omega (t)} \right) &{} \text { if }\quad r = \frac{1}{2} \text { and } 1\le y \le \frac{|t|}{2}, \\ \delta _{j1}y^{r+it} + O(y^{1-r}) + O\left( |t|^{r+1/2} + |t| \sqrt{\omega (t)} \right) &{} \text { if }\quad 1 \ge r> \frac{1}{2} \text { and } 1\le y \le \frac{|t|}{2},\\ \delta _{j1}y^{r+it} + O(y^{1-r})+O\left( |t|^{2r-1/2} + |t|^r \sqrt{\omega (t)} \right) &{} \text { if }\quad \frac{3}{2} \ge r> 1 \text { and } 1\le y \le \frac{|t|}{2}, \\ \delta _{j1}y^{1/2+it} + O(y^{1/2})+O\left( e^{|t|\frac{\pi }{2}-2 \pi y}\right) \left( |t|^{-1/2+\varepsilon }\sqrt{\omega (t)} \right) &{} \text { if }\quad r = \frac{1}{2} \text { and } \frac{|t|}{2}< y,\\ \delta _{j1}y^{r+it} + O(y^{1-r})+O\left( e^{|t|\frac{\pi }{2}-2 \pi y}\right) \left( \sqrt{|t|}+ \frac{\sqrt{\omega (t)}}{\sqrt{|t|}}\right) &{} \text { if }\quad \frac{3}{2} \ge r > \frac{1}{2} \text { and } \frac{|t|}{2} < y, \end{array}\right. } \end{aligned}$$

where the implied constants depend only on the lattice subgroup \(\varGamma \) and \(t_0\).

Remark 1.13

For \( \frac{|t|}{2} < y\), we have an alternative formulation of the theorem:

$$\begin{aligned}&E^{(j)}_0(z, r + it) \\&\quad ={\left\{ \begin{array}{ll} \delta _{j1}y^{1/2+it} + O(y^{1/2})+O\left( e^{|t|\frac{\pi }{2}-2 \pi y}\right) \left( y^{-1}|t|^{1/2+\varepsilon }\sqrt{\omega (t)} \right) &{} \text { if } r = \frac{1}{2} \text { and } \frac{|t|}{2}< y, \\ \delta _{j1}y^{r+it} + O(y^{1-r})+O\left( e^{|t|\frac{\pi }{2}-2 \pi y}\right) \left( y^{-1} \left( |t|^{3/2}+\sqrt{|t|\omega (t)}\right) \right) &{} \text { if } \frac{3}{2} \ge r > \frac{1}{2} \text { and } \frac{|t|}{2} < y.\end{array}\right. } \end{aligned}$$

These bounds on the Eisenstein series give the following corollary:

Corollary 1.14

Let \(j \in \{1, \ldots , q\}\), \(t_0\ge B_0\) be a large constant, \(|t| \ge t_0\), and \(\frac{3}{2} \ge r \ge \frac{1}{2}\). Then, as \(y \rightarrow \infty \), \(E^{(j)}_0(z, r + it)\) decays exponentially (like \(y^{-1}e^{-2 \pi y}\)) to the constant term of its Fourier expansion at a cusp.

Proof

The result is immediate for the Fourier expansion at the standard cusp. For the Fourier expansion at other cusps, the analog of Theorem 1.12 holds with analogous proof. This gives the desired result. \(\square \)

Remark 1.15

We now compare our bounds for the Eisenstein series with those of others.

  (1)

    For \(r >1\) and \(t \in {\mathbb R}\), it can be shown that \(E^{(j)}_0(z, r + it) = \delta _{j1}y^{r+it} +O(y^{1-r}) + O((1+y^{-r})e^{-2\pi y})\) where the latter implied constant depends on t (and the lattice \(\varGamma \)) [12, Corollary 3.5]. Our bound, however, makes the t dependence (for \(|t| \ge t_0\)) explicit. Also, as \(y \rightarrow \infty \), our result gives faster decay (\(y^{-1}e^{-2 \pi y}\) versus \(e^{-2 \pi y}\)) to the constant term of the Fourier expansion.

  (2)

    For \(r= 1/2\), there has been some recent interest in bounds for the Eisenstein series. In particular, the sup-norm problem for certain eigenfunctions has attracted much interest (see [4, 13, 20] for example). Specifically, for Eisenstein series, there are recent results in [2, 11, 23], of which the most relevant for us is the result by Huang and Xu (generalizing the earlier result of Young) for the modular group \(\varGamma = {\text {PSL}}_2({\mathbb Z})\) [11, Theorem 1.1]:

    $$\begin{aligned} E_0(z, 1/2 + it) = y^{1/2+it} +O(y^{1/2}) + O(y^{-1/2} +t^{3/8+\varepsilon }). \end{aligned}$$

    (As \(\varGamma = {\text {PSL}}_2({\mathbb Z})\) has only one cusp, we have dropped the superscript notation in the Eisenstein series.) Note that the bound on the Eisenstein series given by Huang and Xu does not decay exponentially to the constant term of its Fourier expansion as \(y \rightarrow \infty \). Our bound, however, has this exponential decay.

1.2 Outline of paper

Section 2 is devoted to the proof of Theorems 1.1 and 1.3. Section 3 is devoted to the proof of Theorems 1.5 and 1.6. Section 4 gives a proof of Proposition 1.8. Finally, Sect. 5 gives a proof of Theorems 1.10 and 1.12.

2 Bounds for \(K_\nu (y)\) where \(\mathfrak {I}(\nu )\) is large, \(\mathfrak {R}(\nu )\) is bounded, and y is real and positive

For background on asymptotic expansions, see [7] (especially Chap. 7) for example. The saddle points and paths of steepest descent for the function \(K_{it}(y)\) (i.e. purely imaginary order) were obtained by Temme [19]. The saddle points and paths of steepest descent for our function \(K_{\nu }(y)\) are the same, as we now show. In addition, we give a proof of the dominant behavior.

In this section (Sect. 2), let us set

$$\begin{aligned}\nu := r +i t\end{aligned}$$

where \(r, t \in {\mathbb R}\). An integral representation for \(K_{\nu }(z)\) (see [21, p. 182 (7)] for example) is

$$\begin{aligned} K_{\nu }(z) = \frac{1}{2} \int _{-\infty }^\infty e^{-z \cosh R - \nu R } ~\mathrm {d}{R} = \frac{1}{2} \int _{-\infty }^\infty e^{-z \cosh R + \nu R } ~\mathrm {d}{R} \end{aligned}$$
(2.1)

where \(z \in {\mathbb C}\backslash \{0\}\) such that \(|\arg (z)|< \frac{\pi }{2}\). There are two cases: \(y \ge t \ge 0\) and \(0< y < t\).
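As a quick numerical confirmation of (2.1) (not needed for the proofs), both sign choices can be integrated directly for a modest complex order; the sketch below uses mpmath with ad hoc values of \(\nu \) and z:

```python
# Both sign choices in the integral representation (2.1) recover K_nu(z).
import mpmath as mp

mp.mp.dps = 25
nu, z = mp.mpc("0.3", "2"), mp.mpf("1.5")

minus = mp.quad(lambda R: mp.exp(-z * mp.cosh(R) - nu * R), [-mp.inf, mp.inf]) / 2
plus = mp.quad(lambda R: mp.exp(-z * mp.cosh(R) + nu * R), [-mp.inf, mp.inf]) / 2
exact = mp.besselk(nu, z)
print(abs(minus - exact), abs(plus - exact))  # both differences are tiny
```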

Remark 2.1

Note that if \(t <0\), then applying (2.1) allows us to be in one of these two cases.

2.1 First case: \(y \ge t \ge 0\)

Proof of Theorem 1.1

Let us first consider the case \(0< \theta <\pi /2\). Using (2.1), we have

$$\begin{aligned} K_\nu (y) = \frac{1}{2} \int _{-\infty }^\infty e^{-y \varphi (R)} e^{rR} ~\mathrm {d}{R} , \end{aligned}$$
(2.2)

where

$$\begin{aligned}\varphi (R):= \cosh R - i R \sin \theta .\end{aligned}$$

The saddle points (values of R for which \(\varphi '(R)=0\)) are as follows [19] (see also [5, Sect. 2.1] ):

$$\begin{aligned}R_k := i\left( (-1)^k \theta + k\pi \right) , \quad k \in {\mathbb Z}.\end{aligned}$$

Let us now write \(R=u+iw\) and thus we have

$$\begin{aligned} \mathfrak {R}(-\varphi (R)) =&-\cosh u \cos w - w \sin \theta \\ \mathfrak {I}(-\varphi (R)) =&-\sinh u \sin w + u \sin \theta . \end{aligned}$$

The path of steepest descent through the saddle point \(R_0 = i \theta \) is given by \( \mathfrak {I}(-\varphi (R)) = \mathfrak {I}(-\varphi (R_0))\) and is the following curve [19]:

$$\begin{aligned}w = \arcsin \left( \sin \theta \frac{u}{\sinh u} \right) , \quad -\infty< u < \infty .\end{aligned}$$

We remark that \(w'(0)=0\) and that \(w'(u)\) is bounded over all \(-\infty< u <\infty \).

We will apply Laplace’s method, which can be found at [17, p. 127, Theorem 7.1]. Using (2.2), the path of steepest descent as the integration path used in Laplace’s method, and \(R_0=i\theta \) as the saddle point, we see that assumptions (i)–(iv) of Laplace’s method are satisfied.

It remains to show that the final condition (v) is also satisfied. The integration path is a path of steepest descent because \(\mathfrak {I}(\varphi (u+iw))\) is constant along it and \(\mathfrak {R}(\varphi (u+iw)) \rightarrow \infty \) as \(u \rightarrow \pm \infty \). As \(R_0\) is the only saddle point lying on the path of steepest descent, \(R_0\) is a global minimum on the path (see [7, p. 66]). Thus, condition (v) is satisfied, and we may apply Laplace’s method to obtain the desired result.
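Condition (v) can also be observed numerically (this is illustration only, not part of the proof): evaluating \(\mathfrak {R}(\varphi )\) along the curve above shows the global minimum at \(u=0\), i.e. at \(R_0 = i\theta \); the sample value \(\theta = 0.6\) is ad hoc.

```python
# Re(varphi) along the path of steepest descent has its minimum at u = 0.
import math

theta = 0.6  # ad hoc sample in (0, pi/2)

def re_varphi(u):
    # w(u) parametrizes the path of steepest descent; w(0) = theta.
    w = math.asin(math.sin(theta) * u / math.sinh(u)) if u != 0 else theta
    return math.cosh(u) * math.cos(w) + w * math.sin(theta)

us = [i / 100 for i in range(-500, 501)]
vals = [re_varphi(u) for u in us]
print(min(vals), re_varphi(0))  # the minimum value is cos(theta) + theta*sin(theta)
```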

The case \(\theta =0\) is a simplification of the case \(0< \theta < \pi /2\).

Let us now consider the case \(\theta = \pi /2\) (or, equivalently, \(t=y\)). Apply [21, p. 78 (8) and p. 247 (5)] to obtain

$$\begin{aligned}K_{r+iy}(y) \sim \frac{1}{2} \pi i e^{\frac{1}{2}(r+iy)\pi i} \left( - \frac{2}{3\pi } e^{\frac{2}{3} \pi i} \sin (\pi /3) \frac{\varGamma (\frac{1}{3})}{\left( \frac{1}{6} iy\right) ^{1/3}}\right) \end{aligned}$$

as \(y \rightarrow \infty \). Simplifying gives the desired result for the case \(\theta = \pi /2\). This gives the desired result in all cases. \(\square \)

2.2 Second case: \(0< y < t\)

Let us define the constant \(\mu >0\) by \(t = y \cosh \mu \) and the function

$$\begin{aligned}\psi (u) := \cosh u \cos w +w \cosh \mu .\end{aligned}$$

We start by finding the saddle points and a suitable path.

Using (2.1), we have

$$\begin{aligned} K_\nu (y) = \frac{1}{2} \int _{-\infty }^\infty e^{-y \phi (R)} e^{rR} ~\mathrm {d}{R}, \end{aligned}$$
(2.3)

where

$$\begin{aligned}\phi (R):= \cosh R - i R\cosh \mu .\end{aligned}$$

The saddle points (values of R for which \(\phi '(R)=0\)) are as follows [19] (see also [5, Sect. 2.1] ):

$$\begin{aligned}R^\pm _k := \pm \mu + i\left( \frac{\pi }{2} + 2k\pi \right) , \quad k \in {\mathbb Z}.\end{aligned}$$

Let us now write \(R=u+iw\) and thus we have

$$\begin{aligned} \mathfrak {R}(-\phi (R)) =&-\cosh u \cos w -w \cosh \mu = - \psi (u),\\ \mathfrak {I}(-\phi (R)) =&-\sinh u \sin w + u \cosh \mu . \end{aligned}$$

The paths of steepest descent/ascent through the saddle points \(R^\pm _k\) are given by \( \mathfrak {I}(-\phi (R)) = \mathfrak {I}(-\phi (R^\pm _k))\) and are the following family of curves [19]:

$$\begin{aligned} \sin w = \cosh \mu \frac{u}{\sinh u} \pm \frac{\sinh \mu - \mu \cosh \mu }{\sinh u}.\end{aligned}$$

We use only the parts of these curves as shown in [19, Fig. 3.3], which we will refer to as the path of steepest descent. Notice that this path is the union of two branches \({\mathcal {L}}^- \cup {\mathcal {L}}^+\), separated by the imaginary axis, where

$$\begin{aligned} \text { --- }&{\mathcal {L}}^- \text { runs from } -\infty \text { to } 0 \text { and from } 0 \text { to } + i\infty , \\ \text { --- }&{\mathcal {L}}^+ \text { runs from } +i\infty \text { to } 0 \text { and from } 0 \text { to } + \infty . \end{aligned}$$

What is important about this path is that, on both of the branches, the function \(y\phi (R)\) has constant imaginary part, namely

$$\begin{aligned} \chi := \mathfrak {I}(y\phi (R_0^+)):=&y \left( \sinh \mu - \mu \cosh \mu \right) \\ =&y \sinh \mu - t \cosh ^{-1}\left( \frac{t}{y}\right) = \sqrt{t^2 - y^2} - t \cosh ^{-1}\left( \frac{t}{y}\right) , \\ \chi _- := \mathfrak {I}(y\phi (R_0^-)) =&- \chi \end{aligned}$$

for \({\mathcal {L}}^+\) and \({\mathcal {L}}^-\), respectively.

Proof of Theorem 1.3

We will use Laplace’s method, which can be found at [17, p. 127, Theorem 7.1].

Using (2.3), we note that the integral representation is the correct form to apply Laplace’s method. We will use what we called the path of steepest descent as the path of integration; see [19, Fig. 3.3] for the graph. We split this path of integration into two parts, the first from \(-\infty \) to \(i \infty \) and the second from \(i \infty \) to \(\infty \). We apply Laplace’s method separately to the two integration paths and, by the Cauchy-Goursat theorem, add the results together. Let us first consider the integration path from \(i\infty \) to \(\infty \). We see that conditions (i) – (iv) of Laplace’s method are satisfied.

It remains to show that the final condition (v) is also satisfied. The integration path is composed of steepest descent/ascent pieces because \(\mathfrak {I}(\phi (u+iw))\) is constant along it and \(\mathfrak {R}(\phi (u+iw)) \rightarrow \infty \) as \(u \rightarrow \infty \). As \(R^+_0\) is a saddle point lying on the path, \(R^+_0\) is a local minimum on the path and, moreover, the other local minima occur at the other saddle points (see [7, p. 66]), which for us are \(R^+_k\) where \(k \in {\mathbb N}\). Directly computing, we see that \(\mathfrak {R}(\phi (R^+_k)) > \mathfrak {R}(\phi (R^+_0))\) for all \(k \in {\mathbb N}\). Hence, \(R^+_0\) gives the global minimum. Thus, condition (v) is satisfied, and we may apply Laplace’s method to obtain

$$\begin{aligned}e^{-y \phi (R^+_0)} \varGamma \left( \frac{1}{2} \right) \frac{a_0}{\sqrt{y}}\end{aligned}$$

as \(y \rightarrow \infty \) where \(\phi (R) = \cosh R-i R \cosh \mu \) and

$$\begin{aligned}a_0= \frac{e^{rR}}{(2 \phi '')^{1/2}}\end{aligned}$$

evaluated at \(R^+_0\). Computing, we have that the contribution from this part of the path of integration to the dominant term is the following:

$$\begin{aligned}\frac{\sqrt{\pi } e^{r \mu +i r\frac{\pi }{2}}}{\sqrt{2 i y \sinh \mu }} e^{-y\left( \frac{\pi }{2} \cosh \mu +i (\sinh \mu - \mu \cosh \mu ) \right) }.\end{aligned}$$

Likewise, for the other part of the path of integration, Laplace’s method gives

$$\begin{aligned}\frac{\sqrt{\pi } e^{-r \mu +i r\frac{\pi }{2}}}{\sqrt{-2 i y \sinh \mu }} e^{-y\left( \frac{\pi }{2} \cosh \mu -i (\sinh \mu - \mu \cosh \mu ) \right) }.\end{aligned}$$

Here the saddle point which gives the global minimum is \(R^-_0\) and the other saddle points \(R^-_k\) where \(k \in {\mathbb N}\) are larger and can be ignored as before.

Adding these two parts together yields the desired dominant term of the asymptotic expansion for \(K_\nu (y)\). This gives the desired result. \(\square \)
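The final addition can be double-checked numerically: the two contributions above sum to the dominant term of Theorem 1.3 exactly (a trigonometric identity), as the following sketch with ad hoc sample values verifies.

```python
# The two saddle-point contributions sum to the dominant term of Theorem 1.3.
import cmath
import math

r, mu, y = 0.3, 0.7, 5.0  # ad hoc sample values
sh, ch = math.sinh(mu), math.cosh(mu)
chi = sh - mu * ch  # sinh(mu) - mu*cosh(mu)

c_plus = (math.sqrt(math.pi) * cmath.exp(r * mu + 1j * r * math.pi / 2)
          / cmath.sqrt(2j * y * sh) * cmath.exp(-y * (math.pi / 2 * ch + 1j * chi)))
c_minus = (math.sqrt(math.pi) * cmath.exp(-r * mu + 1j * r * math.pi / 2)
           / cmath.sqrt(-2j * y * sh) * cmath.exp(-y * (math.pi / 2 * ch - 1j * chi)))

arg = math.pi / 4 - y * chi
dominant = (math.sqrt(2 * math.pi / (y * sh))
            * cmath.exp(-y * math.pi / 2 * ch + 1j * r * math.pi / 2)
            * (math.cosh(r * mu) * math.sin(arg) - 1j * math.sinh(r * mu) * math.cos(arg)))

print(abs(c_plus + c_minus - dominant))  # zero up to rounding error
```

Note that the principal branch of the complex square root, used here via `cmath.sqrt`, gives \(1/\sqrt{2i} = e^{-i\pi /4}/\sqrt{2}\) and \(1/\sqrt{-2i} = e^{i\pi /4}/\sqrt{2}\), which is what the computation requires.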

3 Uniform bounds for \(K_\nu (y)\) where \(\mathfrak {I}(\nu )\) is large, \(\mathfrak {R}(\nu )\) is bounded, and y is real and positive, near coalescing saddle points

For \(r=0\), C. Balogh computed a uniform asymptotic expansion which is valid in all cases, including the case of two nearby saddle points [3]. Balogh used a technique involving differential equations, but it is not clear that such a technique will work when r is no longer zero. We will use another technique, developed by C. Chester, B. Friedman, and F. Ursell [6], which yields the uniform dominant and next-dominant terms for the case where t and y are nearly equal (or equal) and r is bounded.

3.1 First case: \(y \ge t \ge 0\)

We prove Theorem 1.5 in this section. Let

$$\begin{aligned} F(R):=F(R, \theta ):= -\cosh R + iR \sin \theta . \end{aligned}$$

Then we have that

$$\begin{aligned} K_\nu (y) = \frac{1}{2} \int _{-\infty }^\infty e^{y F(R)} e^{rR} ~\mathrm {d}{R}. \end{aligned}$$
(3.1)

The path of steepest descent has been obtained by Temme [19, Fig. 2.1] and the saddle points (values of R for which \(F'(R)=0\)) that are relevant are \(R_0:=i \theta \) and \(R_1:=i(\pi - \theta )\). Note that \(R_0\) and \(R_1\) are close in the complex plane when \(\theta \) is close to \(\pi /2\). The technique used to estimate the K-Bessel function in Theorems 1.1 and 1.3 depends on the distance between \(R_0\) and \(R_1\) and, hence, does not yield a uniform estimate.

To use the Chester–Friedman–Ursell technique, let us introduce

$$\begin{aligned} {\widetilde{\theta }}&= \theta - \pi /2, \\ S&= 2^{-1/3}\left( iR + \pi /2\right) , \end{aligned}$$

where \(2^{1/3}(1 - \cos {\widetilde{\theta }})\) assumes the role of the parameter \(\alpha \) from the Chester-Friedman-Ursell technique (see [6, (3.2)]). Under the change of variable from R to S, the integral becomes

$$\begin{aligned} K_\nu (y) =\int _{-i\infty +2^{-4/3}\pi }^{i\infty +2^{-4/3}\pi } \frac{-i}{2^{2/3}} e^{ir\left( \pi / 2 - 2^{1/3}S\right) } e^{y F(i\pi / 2 - i2^{1/3}S, {\widetilde{\theta }}+\pi / 2)}~\mathrm {d}{S} \end{aligned}$$
(3.2)

and the relevant saddle points become

$$\begin{aligned} S_0&= 2^{-1/3}\left( iR_0 + \pi /2\right) = -2^{-1/3}{\widetilde{\theta }} \\ \nonumber S_1&= 2^{-1/3}\left( iR_1 + \pi /2\right) = 2^{-1/3}{\widetilde{\theta }}. \end{aligned}$$
(3.3)

We will represent \(F(i\pi / 2 - i2^{1/3}S, {\widetilde{\theta }}+\pi / 2)\) by the cubic [6, (2.1)]

$$\begin{aligned} F(i\pi / 2 - i2^{1/3}S, {\widetilde{\theta }}+\pi / 2) = \frac{1}{3} u^3 - \zeta ({\widetilde{\theta }})u+ A({\widetilde{\theta }}) \end{aligned}$$
(3.4)

where, under this representation, the saddle points correspond as follows:

$$\begin{aligned} S_0&\leftrightarrow u=\zeta ^{\frac{1}{2}} ({\widetilde{\theta }}) \\ S_1&\leftrightarrow u=-\zeta ^{\frac{1}{2}} ({\widetilde{\theta }}). \end{aligned}$$

By substitution in (3.4), we have

$$\begin{aligned} F(i\pi / 2 - i2^{1/3}S_0, {\widetilde{\theta }}+\pi / 2)&= -\frac{2}{3} \zeta ^{\frac{3}{2}}({\widetilde{\theta }})+ A({\widetilde{\theta }}), \\ F(i\pi / 2 - i2^{1/3}S_1, {\widetilde{\theta }}+\pi / 2)&= \frac{2}{3} \zeta ^{\frac{3}{2}}({\widetilde{\theta }})+ A({\widetilde{\theta }}), \end{aligned}$$

which yields

$$\begin{aligned} A({\widetilde{\theta }})&= - \frac{\pi }{2} \cos {\widetilde{\theta }} \\ \zeta ({\widetilde{\theta }})&= \left( \frac{3}{2} \left( {\widetilde{\theta }} \cos {\widetilde{\theta }} -\sin {\widetilde{\theta }} \right) \right) ^{2/3}. \end{aligned}$$

Here, we have taken the branch of \(\zeta ({\widetilde{\theta }})\) required by the Chester–Friedman–Ursell technique (see the top of p. 603 in [6]); with this branch, \(\zeta ({\widetilde{\theta }})\) takes nonnegative real values. Note that

$$\begin{aligned} \zeta ({\widetilde{\theta }}) \sim \frac{{\widetilde{\theta }}^2}{2^{2/3}} \sim 2^{1/3}(1 - \cos {\widetilde{\theta }}) \quad \text { as } {\widetilde{\theta }} \rightarrow 0. \end{aligned}$$
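Both asymptotic forms can be confirmed by a throwaway numerical check (plain Python floats; recall \({\widetilde{\theta }} \le 0\) in this first case, so the argument of the 2/3 power is nonnegative and the chosen branch is real):

```python
import math

def zeta(th):
    # zeta(th) = (1.5*(th*cos th - sin th))**(2/3); for th <= 0 the quantity
    # th*cos th - sin th = -th**3/3 + O(th**5) is nonnegative, so the chosen
    # branch of the 2/3 power is the real one
    return (1.5 * (th * math.cos(th) - math.sin(th))) ** (2.0 / 3.0)

checks = []
for th in (-1e-2, -1e-3, -1e-4):             # theta-tilde -> 0 from below
    approx1 = th**2 / 2**(2/3)               # claimed leading behavior
    approx2 = 2**(1/3) * (1 - math.cos(th))  # the CFU parameter alpha
    checks.append((zeta(th) / approx1, approx1 / approx2))
```

Both ratios tend to 1 as \({\widetilde{\theta }} \rightarrow 0^-\), matching the display above.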

Locally, the representation is analytic in u which yields [6, (2.2)]

$$\begin{aligned} \frac{-i}{2^{2/3}} e^{ir\left( \pi / 2 - 2^{1/3}S\right) } \frac{\mathrm {d}{S}}{\mathrm {d}{u}} = \sum _{m=0}^\infty p_m({\widetilde{\theta }})(u^2 - \zeta )^m + \sum _{m=0}^\infty q_m({\widetilde{\theta }})u (u^2 - \zeta )^m. \end{aligned}$$
(3.5)

Note that \(-{\widetilde{\theta }}\ge 0\). For small enough \(-{\widetilde{\theta }}\) (independent of y and t) [6, Lemma], we have that the dominant term of the asymptotic expansion of \(K_\nu (y)\) is [6, (5.2–5.4), Theorem 2]

$$\begin{aligned}2\pi ie^{yA({\widetilde{\theta }})} p_0({\widetilde{\theta }}) \frac{{\text {Ai}}(y^{2/3}\zeta )}{y^{1/3}}\end{aligned}$$

and the next term is

$$\begin{aligned}-2\pi i e^{yA({\widetilde{\theta }})} q_0({\widetilde{\theta }})\frac{{\text {Ai}}'(y^{2/3}\zeta )}{y^{2/3}}.\end{aligned}$$

We now compute \(p_0({\widetilde{\theta }})\) and \(q_0({\widetilde{\theta }})\). Taking first and second derivatives in (3.4), we have

$$\begin{aligned}&2^{1/3}\left( \cos {\widetilde{\theta }} - \cos (2^{1/3}S) \right) \frac{\mathrm {d}{S}}{\mathrm {d}{u}} = u^2 - \zeta , \\&2^{2/3} \sin (2^{1/3}S) \left( \frac{\mathrm {d}{S}}{\mathrm {d}{u}}\right) ^2 + 2^{1/3}\left( \cos {\widetilde{\theta }} - \cos (2^{1/3}S) \right) \frac{\mathrm {d}{}^2 S}{\mathrm {d}{u}^2} = 2 u. \end{aligned}$$

Substituting the two saddle points into the second derivative equation yields

$$\begin{aligned} \left( \frac{\mathrm {d}{S}}{\mathrm {d}{u}} \bigg |_{u= \zeta ^{1/2}}\right) ^2 =\frac{2^{1/3} \zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}= \left( \frac{\mathrm {d}{S}}{\mathrm {d}{u}} \bigg |_{u= -\zeta ^{1/2}}\right) ^2. \end{aligned}$$

We now wish to determine the signs of the square roots of these two expressions. The Chester–Friedman–Ursell technique gives that our representation is locally uniformly analytic in S and \({\widetilde{\theta }}\), and thus we may take the limits \(S \rightarrow 0\) and \({\widetilde{\theta }} \rightarrow 0\) in either order in the first derivative equation. Now we have that \(S =0 \leftrightarrow u=0\) (see the top of p. 605 in [6]). Taking first \(S \rightarrow 0\), we conclude that

$$\begin{aligned} \frac{\mathrm {d}{S}}{\mathrm {d}{u}} \bigg |_{u= \zeta ^{1/2}} = \sqrt{\frac{2^{1/3} \zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}} = \frac{\mathrm {d}{S}}{\mathrm {d}{u}} \bigg |_{u= -\zeta ^{1/2}} \end{aligned}$$

for all \(-{\widetilde{\theta }}\) small enough.

Now plugging in the two saddle points into (3.5), we solve for \(p_0({\widetilde{\theta }})\) and \(q_0({\widetilde{\theta }})\):

$$\begin{aligned} p_0({\widetilde{\theta }})&= \frac{\frac{-i}{2^{2/3}}\left( e^{ir\left( \pi / 2 - 2^{1/3}S_0\right) } +e^{ir\left( \pi / 2 - 2^{1/3}S_1\right) } \right) \sqrt{\frac{2^{1/3} \zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}} }{2}\\&= \frac{-i}{\sqrt{2}} e^{i r \pi /2} \cos (r {\widetilde{\theta }}) \sqrt{\frac{\zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}}, \\ q_0({\widetilde{\theta }})&= \frac{\frac{-i}{2^{2/3}}\left( e^{ir\left( \pi / 2 - 2^{1/3}S_0\right) } -e^{ir\left( \pi / 2 - 2^{1/3}S_1\right) } \right) \sqrt{\frac{2^{1/3} \zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}} }{2 \zeta ^{1/2}} \\&= \frac{1}{\sqrt{2}} e^{i r \pi /2} \sin (r {\widetilde{\theta }}) \zeta ^{-1/2}\sqrt{\frac{\zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}}. \end{aligned}$$

Thus, the desired dominant term and the next dominant term, respectively, are

$$\begin{aligned}&\frac{\pi \sqrt{2}}{y^{1/3}} e^{-y \frac{\pi }{2} \cos {\widetilde{\theta }} +i r \frac{\pi }{2}} \cos (r {\widetilde{\theta }}) \sqrt{\frac{\zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}} {\text {Ai}}(y^{2/3} \zeta ), \\ \nonumber&\quad \frac{-i\pi \sqrt{2}}{y^{2/3}} e^{-y \frac{\pi }{2} \cos {\widetilde{\theta }} +i r \frac{\pi }{2}} \sin (r {\widetilde{\theta }}) \zeta ^{-1/2}\sqrt{\frac{\zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}} {\text {Ai}}'(y^{2/3} \zeta ). \end{aligned}$$
(3.6)

Finally, to finish the first case, we need to show that outside of a small enough neighborhood of the two saddle points, the integral is negligible (see [6, Sect. 5]). It is a routine calculation to see that, outside of the small enough neighborhood of the two saddle points, the integral is on the order of \(e^{-t {\widetilde{\alpha }}}\) for some \({\widetilde{\alpha }} > \pi /2\). This concludes the proof of the first case, namely Theorem 1.5.
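At \({\widetilde{\theta }}=0\) (the coalescing case \(t=y\)) with \(r=0\), the dominant term in (3.6) reduces to \(2^{1/3}\pi {\text {Ai}}(0) e^{-\pi y/2} y^{-1/3}\), since \(\sqrt{\zeta ^{1/2}/\sin (-{\widetilde{\theta }})} \rightarrow 2^{-1/6}\) and \({\text {Ai}}(y^{2/3}\zeta ) \rightarrow {\text {Ai}}(0)\). A numerical comparison with mpmath (illustrative; the tolerance is deliberately loose because only the first term of the expansion is kept):

```python
from mpmath import mp, mpc, mpf, pi, exp, besselk, airyai

mp.dps = 200  # K_{iy}(y) has size e^{-pi*y/2}; generous precision absorbs
              # the cancellation in evaluating K at large imaginary order

y = mpf(100)  # illustrative size; here r = 0 and t = y (theta = pi/2)

exact = besselk(mpc(0, y), y)  # K_{iy}(y), which is real for real y

# Dominant term of (3.6) in the limit theta-tilde -> 0 with r = 0:
#   2^{1/3} * pi * Ai(0) * exp(-pi*y/2) / y^{1/3}
leading = mpf(2)**(mpf(1)/3) * pi * airyai(0) * exp(-pi*y/2) / y**(mpf(1)/3)

rel_err = abs(exact - leading) / leading
```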

3.2 Second case: \(0< y < t\)

We prove Theorem 1.6 in this section. Let

$$\begin{aligned}G(R):= G(R, \mu ):= -\cosh R + i R\cosh \mu . \end{aligned}$$

Using (2.1), we have

$$\begin{aligned} K_\nu (y) = \frac{1}{2} \int _{-\infty }^\infty e^{y G(R)} e^{rR} ~\mathrm {d}{R}.\end{aligned}$$
(3.7)

The paths of steepest descent/ascent have been obtained by Temme [19], and the saddle points that are relevant are \(R_0 := \mu + i\pi /2\) and \(R_1 := -\mu + i\pi /2\).
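Indeed, \(G'(R) = -\sinh R + i\cosh \mu \) and \(\sinh (\pm \mu + i\pi /2) = i\cosh \mu \), so both points are critical. A quick numerical confirmation (with an illustrative value of \(\mu \)):

```python
import cmath, math

mu = 0.7  # illustrative value of the fixed parameter mu > 0

def G_prime(R):
    # G(R) = -cosh(R) + i*R*cosh(mu)  =>  G'(R) = -sinh(R) + i*cosh(mu)
    return -cmath.sinh(R) + 1j * math.cosh(mu)

R0 = complex(mu, math.pi / 2)
R1 = complex(-mu, math.pi / 2)

# sinh(+-mu + i*pi/2) = i*cosh(mu), so both derivatives vanish
vals = (abs(G_prime(R0)), abs(G_prime(R1)))
```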

It is a routine calculation to see that, outside of a small enough neighborhood of the two saddle points, the integral is on the order of \(e^{-t {\widetilde{\alpha }}}\) for some \({\widetilde{\alpha }} > \pi /2\) and, thus, negligible. To finish, we compute the dominant term and the next dominant term using the Chester–Friedman–Ursell technique. Changing variables

$$\begin{aligned}{\widetilde{\theta }}&= -i \mu , \\ S&= 2^{-1/3}(i R + \pi /2), \end{aligned}$$

in (3.7), we obtain (3.2) and the relevant saddle points become (3.3). Now we have essentially transformed the second case into the first case. There are a few minor differences, which we now state. When \(\mu >0\), the parameter \(2^{1/3}(1 - \cos {\widetilde{\theta }})\) is a negative real number, and thus the Chester–Friedman–Ursell technique requires us to take the branch of \(\zeta ({\widetilde{\theta }})\) for which it is a negative real number. Thus, \(\frac{\zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}\) is a positive real number and the sign of \(\sqrt{\frac{2^{1/3} \zeta ^{1/2}}{\sin (-{\widetilde{\theta }})}}\) is determined in a similar way. Thus, we obtain the dominant and next dominant terms in (3.6). This concludes the proof of the second case, namely Theorem 1.6.

4 Bounds for \(K_{r-1/2+it}(y)\) for \(0<y<1\) and \(1/2 \le r \le 3/2\)

Finally, we give an estimate of \(K_{r-1/2+it}(y)\) for small positive real argument. Recall that we pick \(t_0\ge 1\) to be a fixed large constant (large enough to use the first term in the Stirling asymptotic series for the gamma function for the approximation below).

Proof of Proposition 1.8

Since we only need a bound for small y, it suffices to adapt the bound for purely imaginary order from [5, Sect. 3.1]. It is well known (see [21, p. 78, (6), p. 77, (2)] for example) that, for non-integer order \(\nu \), the K-Bessel function is given by

$$\begin{aligned} K_{\nu }(z) := \frac{1}{2} \pi \frac{I_{-\nu }(z) - I_\nu (z)}{\sin (\nu \pi )}, \end{aligned}$$
(4.1)

where \(I_\nu (z)\) is the modified Bessel function of the first kind

$$\begin{aligned}I_\nu (z) := \sum _{m=0}^\infty \frac{(\frac{1}{2} z)^{\nu +2 m}}{m! \varGamma (\nu + m+1)}.\end{aligned}$$

(The \(\varGamma \) here is the gamma function, not the lattice subgroup.)

We would like to bound \(K_{r - 1/2 +i t}(y)\). An elementary identity gives a lower bound for

$$\begin{aligned} \left| \sin \left( r\pi - 1/2\pi +i t\pi \right) \right| = \left| \frac{-1}{2i} \left( e^{t \pi } e^{-i(r-1/2)\pi }- e^{-t \pi } e^{i(r-1/2)\pi } \right) \right| \ge \frac{1}{2} e^{|t| \pi } - \frac{1}{2} \end{aligned}$$
(4.2)

for all t.
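The inequality (4.2) comes down to \(|\sin (x+iy)|^2 = \sin ^2 x + \sinh ^2 y \ge \sinh ^2 |y|\). A quick numerical spot-check (with illustrative values of r and t):

```python
import cmath, math

def lhs(r, t):
    # |sin(r*pi - pi/2 + i*t*pi)|
    return abs(cmath.sin(complex((r - 0.5) * math.pi, t * math.pi)))

def rhs(t):
    return 0.5 * math.exp(abs(t) * math.pi) - 0.5

# Illustrative spot-check over a few (r, t); (4.2) holds for all t since
# |sin(x + i*y)|**2 = sin(x)**2 + sinh(y)**2 >= sinh(|y|)**2
results = [(lhs(r, t), rhs(t))
           for r in (0.5, 0.9, 1.5)
           for t in (-2.0, -0.3, 0.0, 0.3, 2.0)]
```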

Taking the first term of the Stirling asymptotic series for the gamma function, we have

$$\begin{aligned}\varGamma (s) = \sqrt{2 \pi } s^{s - 1/2} e^{-s} e^{R(s)},\end{aligned}$$

where \(R(s) = O(|s|^{-1})\). Hence, we have

$$\begin{aligned}&|m!\varGamma (r-1/2+ it + m+1)| \\&\quad = 2 \pi m^{m+1/2} e^{-m} |r+1/2+m+it|^{r+m}\\&\qquad \times \, e^{-t \arg (r+1/2+m+it)}e^{-r-1/2-m}e^{O(|r+1/2+m+it|^{-1})} \ge C |t|^r e^{-\frac{|t|\pi }{2}}, \end{aligned}$$

where the constant \(C>0\) depends only on \(t_0\). Note that, since \(r+1/2+m >0\), we have that \(0\le \arg (r+1/2+m+it)\le \pi /2\) for \(t>0\) and \( -\pi /2 \le \arg (r+1/2+m+it)\le 0\) for \(t<0\). Likewise, we have

$$\begin{aligned} |m!\varGamma (-r+1/2- it + m+1)| \ge C |t|^{1-r} e^{-\frac{|t|\pi }{2}}. \end{aligned}$$

Now, for \(0<y<2\), the geometric series gives \(\sum _{m=0}^\infty (y/2)^{2m} = 4/(4 - y^2)\). All of this now implies that

$$\begin{aligned} |K_{r - 1/2 +i t}(y)|\le & {} \pi \frac{\frac{4}{4-y^2}\left( (\frac{y}{2})^{r-1/2} + (\frac{y}{2})^{1/2-r}\right) \frac{e^{|t|\pi /2}}{C \min (|t|^r, |t|^{1-r})}}{e^{|t|\pi } -1} \\\le & {} {\widetilde{C}} y^{1/2-r} e^{-|t|\pi /2} |t|^{r-1}, \end{aligned}$$

where \({\widetilde{C}}\) depends only on \(t_0\) and is uniformly bounded for all large enough \(t_0\). This is the desired result. \(\square \)
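Proposition 1.8's bound can be spot-checked numerically, since mpmath's arbitrary-precision `besselk` accepts complex order. The cutoff 10 on the ratio below is an illustrative cushion for the unspecified constant \({\widetilde{C}}\), not a value taken from the paper:

```python
from mpmath import mp, mpc, mpf, pi, exp, besselk

mp.dps = 60

def ratio(r, t, y):
    # |K_{r-1/2+it}(y)| divided by the claimed bound
    #   y^{1/2-r} * e^{-|t|pi/2} * |t|^{r-1}
    bound = mpf(y)**(mpf(1)/2 - r) * exp(-abs(t) * pi / 2) * mpf(abs(t))**(r - 1)
    return abs(besselk(mpc(r - mpf(1)/2, t), mpf(y))) / bound

ratios = [ratio(r, t, y)
          for r in (mpf('0.5'), mpf('1'), mpf('1.5'))
          for t in (10, 25)
          for y in (mpf('0.1'), mpf('0.5'), mpf('0.9'))]
```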

5 Eisenstein series

5.1 Bounds on the Fourier coefficients of Eisenstein series

Using our result on the asymptotics of the K-Bessel function, we now give the proof of Theorem 1.10, namely a bound for the sum of the \(c_n\).

Proof of Theorem 1.10

The proof is an adaptation of the proof of [18, Proposition 4.1], which is, itself, an adaptation of [22, Proposition 5.1]. Let \(0<Y<H\) be given and define

$$\begin{aligned}J:= \int _{\mathscr {D}}\left| E^{(j)}_0(z,s)\right| ^2 \frac{\mathrm {d}{x}~\mathrm {d}{y}}{y^2} \quad \text { where } {\mathscr {D}}:= (0,1) \times (Y,H).\end{aligned}$$

Let \(B := \max (B_0, H, Y^{-1})\). Since \(E^{(j)}_0(z,s)\) is automorphic, we can apply exactly the same proof as in [18, Proposition 4.1] to obtain

$$\begin{aligned}J \le O(1 + Y^{-1}) \int _{F_B} \left| E^{(j)}_0(z,s)\right| ^2 \frac{\mathrm {d}{x}~\mathrm {d}{y}}{y^2},\end{aligned}$$

where, recall, \(F_B\) is the bounded part (i.e. with cusps removed) of \({\overline{F}}\). Let us define the modified Eisenstein series in which we remove the zeroth term of the Fourier expansion:

$$\begin{aligned}E^{(j)}_{0,B}(z,s):= {\left\{ \begin{array}{ll} E^{(j)}_0(z,s) &{} \text {if } z \in F_B, \\ E^{(j)}_0(z,s) - \delta _{jk} \left( \mathrm {Im}(\sigma _k z)\right) ^s - \varphi _{jk}(s) \left( \mathrm {Im}(\sigma _k z)\right) ^{1-s} &{} \text {if } z \in {\mathscr {C}}_{k,B}. \end{array}\right. }\end{aligned}$$

By the Maass–Selberg relation [10, p. 301 (3.43), p. 281], we have

$$\begin{aligned}&\sum _{j=1}^q \int _{F_B} \left| E^{(j)}_{0,B}(z,s)\right| ^2 \frac{\mathrm {d}{x}~\mathrm {d}{y}}{y^2} \\&\quad = \frac{1}{2r-1}\left( q B^{2r-1} - B^{1 -2r} \sum _{j=1}^q \sum _{j'=1}^q |\varphi _{j j'}(s)|^2\right) + \sum _{j=1}^q \mathrm {Re}\left( \overline{\varphi _{j j}(s)}\frac{B^{2it}}{it}\right) . \end{aligned}$$

Applying [10, p. 300 (3.38)] yields

$$\begin{aligned} J \le O(1 + Y^{-1}) \left( B^{2r-1} +\omega (t)\right) .\end{aligned}$$

Substituting the Fourier expansion of the Eisenstein series (1.6) in the definition of J and applying Parseval’s formula yields

$$\begin{aligned} J \ge \sum _{n\ne 0} |c_n|^2 \int _{2 \pi |n| Y}^{2 \pi |n| H} |K_{r-1/2+it}(y)|^2 \frac{\mathrm {d}{y}}{y}.\end{aligned}$$

Let \(Y = |t|/(8 \pi N)\) and \(H = |t|/(4 \pi )\). With this choice, we have

$$\begin{aligned}\left[ |t|/4, |t|/2\right] \subset \left[ 2 \pi |n|Y, 2 \pi |n| H \right] \quad \text { whenever } 1 \le |n| \le N,\end{aligned}$$

and, hence,

$$\begin{aligned}\sum _{1 \le |n|\le N}|c_n|^2 \le C^{-1}J \quad \text { where }\quad C = \int _{|t|/4}^{|t|/2} |K_{r-1/2+it}(y)|^2 \frac{\mathrm {d}{y}}{y}.\end{aligned}$$

Theorem 1.3 now gives that

$$\begin{aligned}C^{-1} \le O(|t| e^{|t| \pi }).\end{aligned}$$

Combining, we obtain the desired result:

$$\begin{aligned}\sum _{1 \le |n|\le N}|c_n|^2 = O\left( e^{|t|\pi } (N+|t|)\right) \left\{ \omega (t) + \left( |t| + \frac{N}{|t|}\right) ^{2r-1} \right\} .\end{aligned}$$

\(\square \)

5.2 Bounds on Eisenstein series

We now give the proof of Theorem 1.12, namely a bound for the Eisenstein series themselves. Recall that we defined \(s:=r+it\).

Lemma 5.1

Fix \(M>0\) and \(y_0 >0\). Let \(|r|\le M\) and \(y \ge y_0\). Then

$$\begin{aligned}K_{r+it}(y) = O\left( \frac{e^{-y}}{\sqrt{y}}\right) ,\end{aligned}$$

where the constant depends only on M and \(y_0\).

Proof

When \(|R| > (24 |r| y^{-1})^{1/3}\), we have that

$$\begin{aligned}h(R):=-\frac{y R^4}{24} + rR < 0,\end{aligned}$$

and, thus, on the complement, the function h(R) is bounded by a constant \(N(M, y_0)>0\). Using the integral representation (2.1) for the K-Bessel function, we have

$$\begin{aligned}|K_{r+it}(y)| \le \frac{1}{2} \int _{-\infty }^\infty e^{-y\left( 1 + \frac{R^2}{2} + \frac{R^4}{24}\right) +rR}~\mathrm {d}{R} \le \frac{e^N}{2} \int _{-\infty }^\infty e^{-y\left( 1 + \frac{R^2}{2} \right) }~\mathrm {d}{R}.\end{aligned}$$

The desired result now follows. \(\square \)
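The bound of Lemma 5.1 is uniform in t, which can be spot-checked numerically with mpmath (illustrative sample values; the cutoff 3 on the ratio is a convenient cushion for the implied constant, not a value from the lemma):

```python
from mpmath import mp, mpc, mpf, exp, sqrt, besselk

mp.dps = 100  # generous precision for the large-|t| evaluations

def ratio(r, t, y):
    # |K_{r+it}(y)| divided by e^{-y}/sqrt(y); by Lemma 5.1 this is O(1),
    # uniformly in t, once |r| <= M and y >= y_0
    return abs(besselk(mpc(r, t), mpf(y))) / (exp(-mpf(y)) / sqrt(mpf(y)))

ratios = [ratio(r, t, y)
          for r in (-1, 0, 1)
          for t in (0, 5, 50)
          for y in (1, 5, 20)]
```

Note that the ratio is largest for \(t=0\) and shrinks rapidly as |t| grows, consistent with the proof, whose bound discards the t-dependence entirely.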

The following lemma gives some bounds for the K-Bessel function that are convenient for our proof of Theorem 1.12.

Lemma 5.2

Let \(|t| \ge t_0\). We have

$$\begin{aligned}K_{r-1/2+it}(y) = {\left\{ \begin{array}{ll} O\left( y^{1/2-r} e^{-|t| \frac{\pi }{2}} |t|^{r-1}\right) &{}\text { if }\quad 0<y< 1 \text { and } 3/2 \ge r \ge 1/2, \\ O\left( e^{-|t| \frac{\pi }{2} }|t|^{r-5/6}\right) &{} \text { if }\quad 1 \le y< \frac{\pi }{2} |t| \text { and } 2/3 > r \ge 1/2 , \\ O\left( e^{-|t| \frac{\pi }{2} }|t|^{r-1}\right) &{} \text { if }\quad 1 \le y < \frac{\pi }{2} |t| \text { and } 3/2 \ge r \ge 2/3 ,\\ O\left( \frac{e^{-y}}{\sqrt{y}}\right) &{}\text { if }\quad y \ge \frac{\pi }{2} |t| \text { and } 3/2 \ge r \ge 1/2, \end{array}\right. }\end{aligned}$$

where the implied constants depend on \(t_0\) in the first, second, and third branches; the constant in the fourth branch is absolute.

Proof

For \(0<y<1\), apply Proposition 1.8, and, for \(y \ge \frac{\pi }{2} |t|\), apply Lemma 5.1.

Let us now consider \(|t| \le y < \frac{\pi }{2} |t|\). Let \(\theta _0\) be as in Theorem 1.5. For the range \(\frac{|t|}{\sin (\pi /2 - \theta _0)} \le y < \frac{\pi }{2} |t|\), we apply Theorem 1.1 with the observation that \(-\sqrt{x^2 -1} + \arccos (1/x) < 0\) for \(\frac{\pi }{2}> x>1\) to obtain

$$\begin{aligned}K_{r-1/2+it}(y) = O\left( \frac{e^{-|t| \frac{\pi }{2}}}{(y^2 - |t|^2)^{1/4}}\right) = O\left( \frac{e^{-|t| \frac{\pi }{2}}}{|t|^{1/2}}\right) , \end{aligned}$$

where the implied constant depends only on \(t_0\). Note that the conclusion of Theorem 1.1 is uniform over all \(0 < \theta \le \pi /2 - \theta _1\) for any \(\pi /2>\theta _1>0\). For the range \(|t| \le y \le \frac{|t|}{\sin (\pi /2 - \theta _0)}\), we apply Theorem 1.5 to obtain

$$\begin{aligned} K_{r-1/2+it}(y) = O\left( \frac{e^{-|t| \frac{\pi }{2}}}{|t|^{1/3}}\right) , \end{aligned}$$
(5.1)

where the implied constant depends only on \(t_0\).

Finally, let us consider \(1 \le y \le |t|\). Let \(\mu _0\) be as in Theorem 1.6. For the range \(\frac{|t|}{\cosh \mu _0} \le y \le |t|\), we apply Theorem 1.6 to also obtain (5.1) where, likewise, the implied constant depends only on \(t_0\). For the range \(1 \le y \le \frac{|t|}{\cosh \mu _0}\), we will apply Theorem 1.3 with two observations. The first is that the conclusion

$$\begin{aligned} K_{r+it}(y)&\sim \sqrt{\frac{2\pi }{y \sinh \mu }}e^{-y \frac{\pi }{2} \cosh \mu +i r \frac{\pi }{2}} \left[ \cosh (r \mu ) \sin \left( \frac{\pi }{4} - y \left( \sinh \mu - \mu \cosh \mu \right) \right) \right. \\&\left. \quad - \, i \sinh (r \mu )\cos \left( \frac{\pi }{4} - y \left( \sinh \mu - \mu \cosh \mu \right) \right) \right] \end{aligned}$$

is also valid as \(t \rightarrow \infty \).

The second observation is that the conclusion of Theorem 1.3 is uniform over all positive \(\mu \) bounded away from 0, and, hence, is valid for \(y>0\) arbitrarily close to 0. Applying Theorem 1.3 for the range \(1 \le y \le \frac{|t|}{\cosh \mu _0}\) yields

$$\begin{aligned}K_{r-1/2+it}(y) = O\left( \frac{e^{-|t| \frac{\pi }{2}}|t|^{r-1/2}}{\sqrt{|t|}}\right) = O\left( e^{-|t| \frac{\pi }{2}}|t|^{r-1}\right) \end{aligned}$$

where the implied constant depends only on \(t_0\). This gives the desired result. \(\square \)

We also have the following bound, which we will use in the proof of Theorem 1.12.

Lemma 5.3

For \(3/2 \ge r \ge 1/2\), the function \(\varphi _{jk}(r +it)\) is uniformly bounded for \(|t| \ge 1\).

Proof

Apply [10, p. 301, (a)]. \(\square \)

Following the proof scheme of [18, Proposition 4.2], we can now bound the Eisenstein series:

Proof of Theorem 1.12

We now give the proof for \(\frac{3}{2} \ge r > \frac{1}{2}\), leaving the proof for \(r= \frac{1}{2}\) to the end. Consider three cases: \(0<y<1\), \(1 \le y \le \frac{|t|}{2}\), and \(\frac{|t|}{2} < y\).

The first case is \(0<y<1\). Let us consider the range \(\frac{3}{2} \ge r > 1\) first. By Lemmas 5.3 and 5.2, we obtain the following upper bound for (1.6):

$$\begin{aligned} O(y^{1-r}) + O\left( y^{1-r}e^{-|t|\frac{\pi }{2}}|t|^{r-1} \right) \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n)\end{aligned}$$
(5.2)

where

$$\begin{aligned}f(X) := {\left\{ \begin{array}{ll} 1 &{}\text { if }\quad X < \frac{|t|}{4y}, \\ e^{|t| \frac{\pi }{2} - 2 \pi X y} &{}\text { if }\quad X \ge \frac{|t|}{4y}. \end{array}\right. }\end{aligned}$$

Now define

$$\begin{aligned}S(X) := \sum _{1 \le |n| \le X} |c_n|.\end{aligned}$$

By the fact that f(X) is continuous and monotonically decreasing, that

$$\begin{aligned}\sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) \end{aligned}$$

can be written as a telescoping sum, that S(X) is of bounded variation on every closed interval, that \(S(1/2)=0\), and that \(f(X) S(X) \rightarrow 0\) as \(X \rightarrow \infty \) (which follows from Theorem 1.10 and the Cauchy–Schwarz inequality), we may apply the definition of the Riemann–Stieltjes integral to obtain the inequality and integration by parts to obtain the equality:

$$\begin{aligned} \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) \le \int _{1/2}^\infty f(X) \mathrm {d}{S}(X) = -\int _{1/2}^\infty f'(X) S(X) \mathrm {d}{X}. \end{aligned}$$
(5.3)
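The passage from the sum to the Riemann–Stieltjes integral in (5.3) is summation by parts. A toy numerical sketch with stand-in choices (\(a_n = 1/n\) truncated at 200, \(f(X) = e^{-X}\); neither is the paper's \(c_n\) or f) illustrates the mechanism:

```python
import math

# Stand-in data: a_n = 1/n for n < 200 (0 beyond) and f(X) = exp(-X); both
# are illustrative choices, not the c_n and f of the proof
a = {n: 1.0 / n for n in range(1, 200)}

def f(X):
    return math.exp(-X)

def S(X):
    # S(X) = sum of a_n over n <= X (here a_n >= 0, mirroring |c_n|)
    return sum(v for n, v in a.items() if n <= X)

direct = sum(a[n] * f(n) for n in a)

# -int f'(X) S(X) dX computed exactly: S is constant on [n, n+1), so each
# interval contributes S(n)*(f(n) - f(n+1)); the tail beyond 400 is ~e^{-400}
byparts = sum(S(n) * (f(n) - f(n + 1)) for n in range(1, 400))
```

Since f is decreasing and \(f(X)S(X) \rightarrow 0\), the two quantities agree up to an exponentially small truncation error, exactly as in (5.3).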

To bound (5.3), it suffices to estimate S(X) for \(X \ge \frac{|t|}{4y}\) using Theorem 1.10 and the Cauchy–Schwarz inequality:

$$\begin{aligned} S(X) = O\left( e^{|t| \frac{\pi }{2}}\right) \left( X^{r + 1/2}+X \sqrt{\omega (t)}\right) . \end{aligned}$$

Using calculus, we obtain

$$\begin{aligned}\sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) \le O\left( e^{|t|\frac{\pi }{2}}\right) \left( \left( \frac{|t|}{y}\right) ^{r+1/2} +\frac{|t|}{y} \sqrt{\omega (t)}\right) ,\end{aligned}$$

which yields the desired result for the range \(\frac{3}{2} \ge r > 1\).

For the desired result in the range \(1 \ge r > \frac{1}{2}\), replace (5.2) with

$$\begin{aligned} O(y^{1-r}) + O\left( y^{1-r}e^{-|t|\frac{\pi }{2}} \right) \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n)\end{aligned}$$

in the proof for the range \(\frac{3}{2} \ge r > 1\). This proves the first case \(0<y<1\).

The second case is \(1 \le y \le \frac{|t|}{2}\). Replace (5.2) with

$$\begin{aligned} {\left\{ \begin{array}{ll} \delta _{j1}y^{r+it} + O(y^{1-r}) + O\left( \sqrt{y} e^{-|t|\frac{\pi }{2}} \right) \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) &{} \text { if }\quad 1\ge r> \frac{1}{2}, \\ \delta _{j1}y^{r+it} + O(1) + O\left( \sqrt{y} e^{-|t|\frac{\pi }{2}} |t|^{r-1}\right) \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) &{} \text { if }\quad \frac{3}{2} \ge r > 1.\end{array}\right. } \end{aligned}$$

In the case \(1 \le y \le \frac{|t|}{2}\), we have that

$$\begin{aligned} S(X) = O\left( e^{|t| \frac{\pi }{2}}\sqrt{y} \right) \left( X^{r + 1/2} y^{r-1/2}+X \sqrt{\omega (t)}\right) , \end{aligned}$$

which yields

$$\begin{aligned}\sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) \le O\left( e^{|t|\frac{\pi }{2}} y^{-1/2}\right) \left( |t|^{r+1/2} +|t| \sqrt{\omega (t)}\right) \end{aligned}$$

and the desired result for the second case \(1 \le y \le \frac{|t|}{2}\).

The third case is \(\frac{|t|}{2} < y\). For \(\frac{3}{2} \ge r > \frac{1}{2}\), replace (5.2) with

$$\begin{aligned}\delta _{j1}y^{r+it} + O(y^{1-r}) + O\left( 1\right) \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) \end{aligned}$$

and replace the previous f(X) with

$$\begin{aligned}f(X) = \frac{e^{-2\pi X y}}{\sqrt{2 \pi X}}.\end{aligned}$$

In the case that \(\frac{|t|}{2} < y\), we have that

$$\begin{aligned} S(X) = O\left( e^{|t| \frac{\pi }{2}} \right) \left( X^2 + \sqrt{|t|}X^{\frac{3}{2}} + \gamma (t) X + \gamma (t) \sqrt{|t|} X^{\frac{1}{2}}\right) , \end{aligned}$$

where \(\gamma (t):= \sqrt{\omega (t)} +|t|\). Then

$$\begin{aligned} \sum _{n=1}^\infty \left( |c_n| + |c_{-n}| \right) f(n) \le \int _{1}^\infty f(X) \mathrm {d}{S}(X) \end{aligned}$$

holds and the desired result for the third case \(\frac{|t|}{2} < y\) now follows by calculus. This completes the proof of the theorem for \(\frac{3}{2} \ge r > \frac{1}{2}\).

For \(r=\frac{1}{2}\), the proof is analogous to that of \(1 \ge r > \frac{1}{2}\), except we replace Theorem 1.10 with [18, Proposition 4.1], yielding, for every \(\varepsilon >0\), the following estimate for S(X):

$$\begin{aligned}S(X) = O\left( e^{|t| \frac{\pi }{2}} |t|^\varepsilon \sqrt{\omega (t)}X^{\frac{1}{2} + \varepsilon } \sqrt{X + |t|}\right) ,\end{aligned}$$

which holds for every \(X \ge \frac{1}{2}\). With this change, the proofs of the three cases (\(0<y<1\), \(1 \le y \le \frac{|t|}{2}\), and \(\frac{|t|}{2} < y\)) are analogous. This completes the proof of the theorem. \(\square \)