1 Introduction and background

This study is devoted to an asymptotic formula for the quantity \(c_{n,m}\), the number of weighted integer partitions of n having exactly m parts, \(1\le m\le n\). The weights are a sequence of real numbers \(b_k\), \(k\ge 1\), and the ordinary bivariate generating function f(y, z) of the sequence \(c_{n,m}\) is

$$\begin{aligned} f(y,z)=\sum _{n=1}^\infty \sum _{m=1}^n c_{n,m} y^m z^n = \prod _{k=1}^\infty (1-yz^k)^{-b_k}. \end{aligned}$$
(1)
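As a concrete illustration (a sketch, not part of the argument), the simplest weighted case \(b_k=1\) for all k reduces (1) to the ordinary partition generating function, and the coefficients \(c_{n,m}\) can be computed by multiplying in one factor of the product at a time:

```python
# Sketch: expand prod_k (1 - y z^k)^{-1} (the case b_k = 1 of (1)) as a power
# series; then c[n][m] counts partitions of n into exactly m parts.

def partition_counts(N):
    """c[n][m] = number of partitions of n into exactly m parts, n, m <= N."""
    c = [[0] * (N + 1) for _ in range(N + 1)]
    c[0][0] = 1
    for k in range(1, N + 1):        # multiply in the factor (1 - y z^k)^{-1}
        for n in range(k, N + 1):    # ascending n allows the part k to repeat
            for m in range(1, N + 1):
                c[n][m] += c[n - k][m - 1]
    return c

c = partition_counts(12)
# e.g. c[5][2] counts 5 = 4+1 = 3+2, and sum over m of c[6][m] is p(6) = 11
```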

Let

$$\begin{aligned} D(s)= \sum _{k=1}^\infty \frac{b_k}{k^s} \end{aligned}$$

be the Dirichlet generating function for the sequence of weights. We make the following assumptions on D(s):

  1. (i)

Let \(s=\sigma +it\). For constants \(r>0\) and \(1<C_0<2\), the Dirichlet series D(s) converges in the half-plane \(\sigma >r\), and the function D(s) has an analytic continuation to the half-plane

    $$\begin{aligned} \mathcal {H}=\{s:\sigma \ge -C_0\} \end{aligned}$$
    (2)

    on which it is analytic except for a simple pole at \(s=r\) with residue \(A>0\).

  2. (ii)

    There is a constant \(C_1>0\) such that

$$\begin{aligned} D(s)=O\left( |t|^{C_1}\right) ,\quad |t|\rightarrow \infty \end{aligned}$$
    (3)

    uniformly in \(s\in \mathcal {H}\).

  3. (iii)

    There is a constant \(C>0\) such that

$$\begin{aligned} b_k\sim Ck^{r-1},\quad k\rightarrow \infty . \end{aligned}$$
    (4)

The first two conditions are similar to assumptions of Meinardus [1], although we have assumed \(1<C_0<2\) in the second condition rather than the slightly weaker assumption of Meinardus [1] that \(0<C_0 < 1\). Meinardus’ third condition did not make any direct assumptions on the \(b_k\). He assumed

  1. (iii)’

    There are constants \(C_2>0\) and \(\nu >0\), such that the function \(g(x)=\sum _{k=1}^\infty b_k e^{-kx}\), \(x=\delta +2\pi i\alpha \), \(\alpha \) real and \(\delta >0\) satisfies

$$\begin{aligned} \mathfrak {R}(g(x))-g(\delta )\le -C_2\delta ^{-\nu },\quad |\arg (x)|>\pi /4,\ 0\ne |\alpha |\le 1/2, \end{aligned}$$

    for small enough values of \(\delta \).

Meinardus [1] introduced his conditions in an analysis of \(c_n=\sum _{m=1}^n c_{n,m}\) with generating function f(1, z). Granovsky et al. [7] weakened condition (iii)\(^\prime \) and obtained the asymptotics of \(c_n\) under

  1. (iii)”

    For small enough \(\delta >0\) and any \(\mu >0\),

    $$\begin{aligned} \sum _{k=1}^\infty b_ke^{-k\delta }\sin ^2(\pi k\alpha ) \ge \left( 1+\frac{r}{2}+\mu \right) \frac{2}{\log 5}|\log \delta |, \end{aligned}$$

    where \(\sqrt{\delta }\le \alpha \le 1/2\).

Let \(\xi _n\) be a random variable having the distribution

$$\begin{aligned} \mathbb {P}(\xi _n=m)=\frac{c_{n,m}}{c_n},\quad 1\le m\le n. \end{aligned}$$

Haselgrove and Temperley [2] obtained an expression for \(c_{n,m}\) under several conditions, one of which implies \(r<2\), and conjectured that \(\xi _n\) should have a limiting Gaussian distribution for \(r>2\). Of particular interest is the case \(b_k=k\), for which \(c_n\) is the number of plane partitions of n and \(\xi _n\) is the trace, that is, the sum of the diagonal parts; see [3]. Under conditions (i) (with \(0<C_0<1\)), (ii), and (iii)\(^{\prime \prime }\), Mutafchiev [4] found the limiting distribution of \(\xi _n\) for all \(r>0\). The non-Gaussian limiting distributions for \(r<2\) had been discovered previously, as explained in [4]. The Gaussian limiting distributions for \(r\ge 2\) confirmed the conjecture of [2].

Hwang [5] studied the number of components in a randomly chosen selection (a partition having no repeated parts), assuming Meinardus-type conditions and analysing a bivariate generating function analogous to (1).

In this paper we find asymptotics of \(c_{n,m}\) through an analysis of the bivariate function (1) which adapts the methods used by Granovsky et al. [6,7,8,9] for finding the asymptotics of the coefficients of univariate generating functions, including f(1, z). The initiator of the method was Meinardus [1]. Our main result is stated in terms of functions of n and m defined in (8) and (9). Let

$$\begin{aligned} \mathrm{Li}_s(z)=\sum _{j=1}^\infty \frac{z^j}{j^s} \end{aligned}$$
(5)

be the polylogarithmic function of order s and define

$$\begin{aligned} \varLambda (\mu )=\mathrm{Li}_{r+1}(e^{-\mu })^{-r} \mathrm{Li}_{r}(e^{-\mu })^{r+1}. \end{aligned}$$

The asymptotic

$$\begin{aligned} \mathrm{Li}_s(e^{-\mu })=e^{-\mu } + O(e^{-2\mu }),\quad \mu \rightarrow \infty , \end{aligned}$$
(6)

which holds uniformly for all \(s\in \mathcal {H}\), yields \(\varLambda (\mu )\sim e^{-\mu }\). The identity, valid for all s,

$$\begin{aligned} \frac{\partial {\mathrm{Li}_{s}(e^{-\mu })}}{\partial {\mu }}= -\mathrm{Li}_{s-1}(e^{-\mu }) \end{aligned}$$
(7)

implies that

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}\mu }\varLambda (\mu )&= r\mathrm{Li}_{r+1}(e^{-\mu })^{-r-1}\mathrm{Li}_{r}(e^{-\mu })^{r+2} - (r+1)\mathrm{Li}_{r+1}(e^{-\mu })^{-r}\mathrm{Li}_{r}(e^{-\mu })^{r}\mathrm{Li}_{r-1}(e^{-\mu })\\&= -e^{-\mu } + O(e^{-2\mu }),\quad \mu \rightarrow \infty , \end{aligned}$$

where the implicit constant in the \(O(\cdot )\) term depends on r. Therefore, there is a \(\mu _0>0\) such that \(\varLambda (\mu )\) decreases monotonically to 0 on \((\mu _0,\infty )\), and hence the restriction of \(\varLambda \) to \((\mu _0,\infty )\) has an inverse \(\varLambda ^{-1}\). Assuming that \(m=o\left( n^{\frac{r}{r+1}}\right) \) and letting \(h_r=A\varGamma (r)\), for n large enough define

$$\begin{aligned} \mu _{n,m}= \varLambda ^{-1}\left( \frac{r^rm^{r+1}n^{-r}}{h_r}\right) \end{aligned}$$
(8)

and

$$\begin{aligned} \delta _{n,m}= \frac{rm\mathrm{Li}_{r+1}(e^{-\mu _{n,m}})}{n\mathrm{Li}_{r}(e^{-\mu _{n,m}})}. \end{aligned}$$
(9)

Note that \(\mu _{n,m}\rightarrow \infty \) as \(n\rightarrow \infty \) and

$$\begin{aligned} e^{-\mu _{n,m}}\sim \varLambda (\mu _{n,m})\sim r^r m^{r+1}n^{-r}/h_r \end{aligned}$$
(10)

and

$$\begin{aligned} \delta _{n,m}\sim rm/n \end{aligned}$$
(11)

as \(n\rightarrow \infty \).
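These definitions are straightforward to check numerically. The sketch below (with the arbitrary choices \(r=2\), \(A=1\), so \(h_r=\varGamma (2)=1\)) computes \(\varLambda \) from the series (5), inverts it by bisection as in (8), forms \(\delta _{n,m}\) by (9), and confirms the asymptotics (10) and (11):

```python
# Numerical sketch; r = 2, A = 1 (so h_r = Gamma(2) = 1) are arbitrary choices.
import math

def polylog(s, x, terms=300):
    # the series (5); converges rapidly for 0 < x < 1
    return sum(x**j / j**s for j in range(1, terms + 1))

def Lam(mu, r):
    x = math.exp(-mu)
    return polylog(r + 1, x) ** (-r) * polylog(r, x) ** (r + 1)

def Lam_inv(y, r, lo=1.0, hi=80.0):
    # Lambda decreases to 0 for large mu, so bisection applies
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Lam(mid, r) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r, A = 2, 1.0
h_r = A * math.gamma(r)
n, m = 10**6, 100                                   # m = o(n^{r/(r+1)}) holds
mu = Lam_inv(r**r * m**(r + 1) * n**(-r) / h_r, r)  # definition (8)
x = math.exp(-mu)
delta = r * m * polylog(r + 1, x) / (n * polylog(r, x))   # definition (9)
ratio10 = x / (r**r * m**(r + 1) * n**(-r) / h_r)   # (10): tends to 1
ratio11 = delta / (r * m / n)                       # (11): tends to 1
```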

Theorem 1

Assume conditions (i), (ii) and (iii) above hold. If \(m=m(n)\) is such that

$$\begin{aligned} m=o\left( n^{\frac{r}{r+1}}\right) , \end{aligned}$$
(12)

and

$$\begin{aligned} \lim _{n\rightarrow \infty }m(n)/\log ^{3+\epsilon }n=\infty \end{aligned}$$
(13)

for some \(\epsilon >0\) then

$$\begin{aligned} c_{n,m}\sim \exp \left( (m+1)\mu _{n,m}+n\delta _{n,m} +h_r\delta _{n,m}^{-r} \mathrm{Li}_{r+1}(e^{-\mu _{n,m}})\right) \frac{\delta _{n,m}^{r+1}}{2\pi C\sqrt{r\varGamma (r)}}, \end{aligned}$$
(14)

where

$$\begin{aligned} h_r=A\varGamma (r). \end{aligned}$$
(15)

If

$$\begin{aligned} m=o\left( n^{\frac{r}{r+2}}\right) , \end{aligned}$$
(16)

then, setting

$$\begin{aligned} \mu _{n,m}=-\log \left( \frac{r^r m^{r+1}n^{-r}}{h_r}\right) \end{aligned}$$
(17)

and

$$\begin{aligned} \delta _{n,m}=\frac{rm}{n}, \end{aligned}$$
(18)

using (6) in (35) and (37) again produces (36) and (38). It follows that, under (16), we may use (17) and (18) in Theorem 1 instead of (8) and (9).
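For the unweighted case \(b_k=1\) (so \(D(s)=\zeta (s)\), \(r=1\), \(A=C=1\), \(h_r=\varGamma (1)=1\)), \(c_{n,m}\) is the classical number of partitions of n into exactly m parts, so the right-hand side of (14), with the simplified choices (17) and (18), can be compared with the exact count. The sample point below is far from the asymptotic regime (in particular (13) fails there), so only rough agreement should be expected; this is a sketch, not a verification of the theorem:

```python
# Sketch: b_k = 1, i.e. D(s) = zeta(s), r = 1, A = C = 1, h_r = Gamma(1) = 1.
import math

r, C, A = 1, 1.0, 1.0
h_r = A * math.gamma(r)

def polylog(s, x, terms=200):
    return sum(x**j / j**s for j in range(1, terms + 1))

def rhs14(n, m):
    mu = -math.log(r**r * m**(r + 1) * n**(-r) / h_r)            # (17)
    delta = r * m / n                                            # (18)
    expo = ((m + 1) * mu + n * delta
            + h_r * delta**(-r) * polylog(r + 1, math.exp(-mu)))
    return (math.exp(expo) * delta**(r + 1)
            / (2 * math.pi * C * math.sqrt(r * math.gamma(r))))

def p_table(n, mmax):
    # p(n, m) = p(n-1, m-1) + p(n-m, m): partitions of n into exactly m parts
    p = [[0] * (mmax + 1) for _ in range(n + 1)]
    p[0][0] = 1
    for nn in range(1, n + 1):
        for mm in range(1, mmax + 1):
            p[nn][mm] = p[nn - 1][mm - 1] + (p[nn - mm][mm] if nn >= mm else 0)
    return p

n, m = 10000, 4
ratio = rhs14(n, m) / p_table(n, m)[n][m]   # rough agreement only
```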

If \(r>2\), then Theorem 1 of [4] implies that there is a constant \(\kappa >0\) such that \(\mathbb {P}\left( \log ^{3+\epsilon } n\le \xi _n\le \kappa n^{\frac{r-1}{r+1}}\right) = 1 - o(1)\), and so Theorem 1 covers the range of m that is significant for the distribution of \(\xi _n\). However, if \(r\le 2\), then \(\mathbb {P}(\xi _n\le m)=o(1)\) for any m satisfying (12).

The assumption (4) can probably be weakened to, say, \(b_k\asymp k^{r-1}\) and an approximation to \(c_{n,m}\) still obtained, but doing so with the methods of this paper would require at least the derivation or imposition of a lower bound on the left-hand side of (45).

2 A fundamental identity

We will establish an expression for \(c_{n,m}\) that is fundamental to our analysis. Define a truncation of f(y, z) by

$$\begin{aligned} f_n(y,z)=\prod _{k=1}^n (1-yz^k)^{-b_k}. \end{aligned}$$
(19)

Let \(X_k\) have probability mass function

$$\begin{aligned} \mathbb {P}(X_k=l) = \left( {\begin{array}{c}b_k+l-1\\ l\end{array}}\right) (1-e^{-\mu -\delta k})e^{-\mu l - \delta kl}, \quad l\ge 0, \end{aligned}$$

a negative binomial distribution with parameters \(b_k\) and \(e^{-\mu -\delta k}\), where the parameters \(\mu >0\), \(\delta >0\) are arbitrary, and let

$$\begin{aligned} Y_n=\sum _{k=1}^n X_k, \quad Z_n=\sum _{k=1}^n kX_k. \end{aligned}$$

Lemma 1

For any \(\mu >0\) and \(\delta >0\) we have

$$\begin{aligned} c_{n,m}= e^{m\mu +n\delta } f_n(e^{-\mu },e^{-\delta }) \, \mathbb {P}(Y_n=m,Z_n=n). \end{aligned}$$

Proof

Observe that

$$\begin{aligned} c_{n,m}= & {} \frac{1}{(2\pi i)^2} \oint \oint \frac{f(y,z)}{y^{m+1}z^{n+1}}dy\,dz\\= & {} e^{m\mu +n\delta }\int _{-1/2}^{1/2}\int _{-1/2}^{1/2} f(e^{-\mu +2\pi i\beta },e^{-\delta +2\pi i\alpha })e^{-2\pi i\alpha n} e^{-2\pi i \beta m}\mathrm{d}\alpha \, \mathrm{d}\beta . \end{aligned}$$

It follows that

$$\begin{aligned} c_{n,m}= e^{m\mu +n\delta }\int _{-1/2}^{1/2}\int _{-1/2}^{1/2} f_n(e^{-\mu +2\pi i\beta },e^{-\delta +2\pi i\alpha })e^{-2\pi i\alpha n} e^{-2\pi i \beta m}\mathrm{d}\alpha \, \mathrm{d}\beta . \end{aligned}$$
(20)

For \(|\alpha |\le 1/2\) and \(|\beta |\le 1/2\) we have

$$\begin{aligned} \mathbb {E}(e^{(2\pi i\alpha +2\pi i\beta k)X_k})= & {} \left( \frac{1-e^{-\mu - \delta k}}{1-e^{-\mu - \delta k + 2\pi i\alpha +2\pi i\beta k}} \right) ^{b_k}\\= & {} \left( \frac{1-e^{-\mu - \delta k}}{1-e^{-\mu +2\pi i\alpha - \delta k + 2\pi i\beta k}} \right) ^{b_k}. \end{aligned}$$

Therefore, the joint characteristic function of \(Y_n\) and \(Z_n\) is

$$\begin{aligned} \phi _n(\alpha ,\beta ):= & {} \mathbb {E}(e^{2\pi i(\alpha Y_n + \beta Z_n)})\nonumber \\= & {} \prod _{k=1}^n \mathbb {E}(e^{2\pi i(\alpha +\beta k)X_k})\nonumber \\= & {} \prod _{k=1}^n \left( \frac{1-e^{-\mu - \delta k}}{1-e^{-\mu +2\pi i\alpha - \delta k + 2\pi i\beta k}} \right) ^{b_k}\nonumber \\= & {} \frac{f_n(e^{-\mu +2\pi i\alpha },e^{-\delta +2\pi i\beta })}{ f_n(e^{-\mu },e^{-\delta })}. \end{aligned}$$
(21)

We now combine (20) and (21) to obtain

$$\begin{aligned} c_{n,m}= & {} e^{m\mu +n\delta } f_n(e^{-\mu },e^{-\delta })\int _{-1/2}^{1/2}\int _{-1/2}^{1/2} \phi _n(\alpha ,\beta ) e^{-2\pi i\alpha m} e^{-2\pi i \beta n}\mathrm{d}\alpha \, \mathrm{d}\beta \\= & {} e^{m\mu +n\delta } f_n(e^{-\mu },e^{-\delta }) \mathbb {P}(Y_n=m,Z_n=n). \end{aligned}$$

\(\square \)
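Note that Lemma 1 is an exact identity for every \(\mu ,\delta >0\), not merely an asymptotic one. The following sketch checks it numerically in the unweighted case \(b_k=1\), where each \(X_k\) is geometric with parameter \(e^{-\mu -\delta k}\) (the values of \(n, m, \mu , \delta \) below are arbitrary):

```python
# Exact numerical check of Lemma 1 for b_k = 1 (geometric X_k).
import math

n, m = 8, 3
mu, delta = 0.7, 0.3        # arbitrary positive parameters

# c_{n,m}: partitions of n into exactly m parts, by expanding (1)
c = [[0] * (m + 1) for _ in range(n + 1)]
c[0][0] = 1
for k in range(1, n + 1):
    for nn in range(k, n + 1):
        for mm in range(1, m + 1):
            c[nn][mm] += c[nn - k][mm - 1]

# the truncated generating function (19)
f_n = math.prod(1 / (1 - math.exp(-mu - delta * k)) for k in range(1, n + 1))

# P(Y_n = m, Z_n = n), exactly, by dynamic programming over (Y, Z)
dp = {(0, 0): 1.0}
for k in range(1, n + 1):
    p = math.exp(-mu - delta * k)
    new = {}
    for (y, z), w in dp.items():
        l = 0
        while y + l <= m and z + k * l <= n:   # larger l cannot reach (m, n)
            key = (y + l, z + k * l)
            new[key] = new.get(key, 0.0) + w * (1 - p) * p**l
            l += 1
    dp = new
prob = dp.get((m, n), 0.0)

lemma_rhs = math.exp(m * mu + n * delta) * f_n * prob   # should equal c[n][m]
```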

In proving Theorem 1 we take \(\mu =\mu _{n,m}\) and \(\delta =\delta _{n,m}\) given by (8) and (9), giving

$$\begin{aligned} c_{n,m}= e^{m\mu _{n,m}+n\delta _{n,m}} f_n(e^{-\mu _{n,m}},e^{-\delta _{n,m}}) \, \mathbb {P}(Y_n=m,Z_n=n). \end{aligned}$$
(22)

In Sect. 3 we estimate \(f_n(e^{-\mu _{n,m}},e^{-\delta _{n,m}})\) and in Sect. 4 we estimate \(\mathbb {P}(Y_n=m,Z_n=n)\).

3 Asymptotics for the truncated generating function

We first find the asymptotics of \(f(e^{-\mu },e^{-\delta })\).

Lemma 2

We have

$$\begin{aligned} \log f(e^{-\mu },e^{-\delta })&= h_r\delta ^{-r} \mathrm{Li}_{r+1}(e^{-\mu }) + h_0\mathrm{Li}_1(e^{-\mu }) - h_{-1}\delta \mathrm{Li}_0(e^{-\mu })\nonumber \\&\quad + \varDelta (\mu ,\delta ), \quad \mu ,\delta >0, \end{aligned}$$
(23)

where \(h_r\) is given by (15), \(h_0=D(0)\), \(h_{-1}=D(-1)\), and

$$\begin{aligned} \varDelta (\mu ,\delta )= \frac{1}{2\pi i}\int _{-C_0 -i\infty }^{-C_0+i\infty }\delta ^{-s}\varGamma (s)D(s)\mathrm{Li}_{s+1}(e^{-\mu })\mathrm{d}s =O(\delta ^{C_0}e^{-\mu }). \end{aligned}$$

Proof

Substituting the expression of \(e^{-\delta }\) as the inverse Mellin transform of the Gamma function:

$$\begin{aligned} e^{-\delta }=\frac{1}{2\pi i}\int _{v-i\infty }^{v+i\infty } \delta ^{-s}\varGamma (s)\,\mathrm{d}s,\quad \delta>0, v>0, \end{aligned}$$

with v taken to be \(v= 1+r\), into the series expansion of \(\log f(e^{-\mu },e^{-\delta })\) obtained from (1), we obtain

$$\begin{aligned} \log f(e^{-\mu },e^{-\delta })= & {} - \sum _{k=1}^\infty b_k \log (1-e^{-\mu }e^{-\delta k})\nonumber \\= & {} \sum _{k=1}^\infty b_k\sum _{j=1}^\infty \frac{e^{-\mu j}e^{-\delta kj}}{j}\nonumber \\= & {} \sum _{k=1}^\infty b_k\sum _{j=1}^\infty \frac{e^{-\mu j}}{j} \frac{1}{2\pi i}\int _{1+r -i\infty }^{1+r+i\infty }(\delta kj)^{-s}\varGamma (s)\,\mathrm{d}s\nonumber \\= & {} \frac{1}{2\pi i}\int _{1+r -i\infty }^{1+r+i\infty }\delta ^{-s}\varGamma (s)\sum _{k=1}^\infty \frac{b_k}{k^s}\sum _{j=1}^\infty \frac{e^{-\mu j}}{j^{s+1}}\, \mathrm{d}s\nonumber \\= & {} \frac{1}{2\pi i}\int _{1+r -i\infty }^{1+r+i\infty }\delta ^{-s}\varGamma (s)D(s)\mathrm{Li}_{s+1}(e^{-\mu })\mathrm{d}s. \end{aligned}$$
(24)

The function \(\mathrm{Li}_{s+1}(e^{-\mu })\) defined in (5) is an entire function of s for each \(\mu >0\), while by condition (i) the function D(s) is holomorphic in \(\mathcal {H}\) except for a simple pole at \(s=r\) with positive residue A. The gamma function has simple poles at \(s=0\) and \(s=-1\) with residues 1 and \(-1\), respectively. We shift the contour of integration in (24) from \(\{s:\mathfrak {R}(s)=1+r\}\) to \(\{s:\mathfrak {R}(s)=-C_0\}\). In performing this shift we use (3), the fact that

$$\begin{aligned} |\mathrm{Li}_{s+1}(e^{-\mu })|\le \sum _{j=1}^\infty \frac{e^{-\mu j}}{j^{1-C_0}}=O(e^{-\mu }), \quad s\in \mathcal {H}, \end{aligned}$$
(25)

and the bound

$$\begin{aligned} \varGamma (s)=O\left( \exp \left( -\frac{\pi }{2}|t|\right) |t|^{C_2}\right) \end{aligned}$$

for a constant \(C_2>0\); see [3]. The Cauchy residue theorem produces (23). \(\square \)
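A numerical illustration of Lemma 2 (a sketch): for the plane-partition weights \(b_k=k\) we have \(D(s)=\zeta (s-1)\), so \(r=2\), \(A=1\), \(h_r=\varGamma (2)=1\), \(h_0=\zeta (-1)=-1/12\), and \(h_{-1}=\zeta (-2)=0\); the remainder \(\varDelta (\mu ,\delta )=O(\delta ^{C_0}e^{-\mu })\) is then tiny for small \(\delta \):

```python
# Sketch for b_k = k: D(s) = zeta(s-1), r = 2, A = 1, h_r = Gamma(2) = 1,
# h_0 = zeta(-1) = -1/12, h_{-1} = zeta(-2) = 0.  The two sides of (23)
# should differ only by the small remainder Delta.
import math

mu, delta = 1.0, 0.05       # arbitrary; delta is small

def polylog(s, x, terms=400):
    return sum(x**j / j**s for j in range(1, terms + 1))

# left-hand side of (23), summed directly (terms beyond k = 5000 are negligible)
direct = sum(-k * math.log(1 - math.exp(-mu - delta * k)) for k in range(1, 5001))

# right-hand side of (23) without Delta
h_r, h_0, h_m1 = math.gamma(2), -1.0 / 12.0, 0.0
x = math.exp(-mu)
approx = (h_r * delta**(-2) * polylog(3, x)
          + h_0 * polylog(1, x)
          - h_m1 * delta * polylog(0, x))
```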

We are now able to find the asymptotics of the second factor of (22).

Lemma 3

Under the assumptions of Theorem 1,

$$\begin{aligned} f_n(e^{-\mu _{n,m}},e^{-\delta _{n,m}}) \sim \exp \left( h_r\delta _{n,m}^{-r}\mathrm{Li}_{r+1}(e^{-\mu _{n,m}})\right) . \end{aligned}$$

Proof

By Lemma 2, we have

$$\begin{aligned} \log f(e^{-\mu _{n,m}},e^{-\delta _{n,m}}) =h_r\delta _{n,m}^{-r}\mathrm{Li}_{r+1}(e^{-\mu _{n,m}})+o(1) \end{aligned}$$

and so, by (1) and (19),

$$\begin{aligned} \log f_n(e^{-\mu _{n,m}},e^{-\delta _{n,m}})&=h_r\delta _{n,m}^{-r}\mathrm{Li}_{r+1}(e^{-\mu _{n,m}})\\&\quad + \sum _{k=n+1}^\infty b_k \log (1-e^{-\mu _{n,m}}e^{-\delta _{n,m} k}) +o(1). \end{aligned}$$

We estimate

$$\begin{aligned} \sum _{k=n+1}^\infty b_k \log (1-e^{-\mu _{n,m}}e^{-\delta _{n,m} k})= & {} O\left( \sum _{k=n+1}^\infty b_k e^{-\mu _{n,m}} e^{-\delta _{n,m} k}\right) \\= & {} O\left( e^{-\mu _{n,m}}\sum _{k=n+1}^\infty k^{r-1} e^{-\delta _{n,m} k}\right) \\= & {} O\left( e^{-\mu _{n,m}}\delta _{n,m}^{-r} \int _{n\delta _{n,m}}^\infty t^{r-1}e^{-t}\,\mathrm{d}t\right) \\= & {} o\left( e^{-\mu _{n,m}}\delta _{n,m}^{-r}\right) , \end{aligned}$$

where we have used \(n\delta _{n,m}\rightarrow \infty \) which follows from (11) and (13). \(\square \)

4 The local limit theorem

We have found asymptotics for the first two factors of (22); we now find them for the third. The proof of the following Local Limit Lemma is similar in places to one in [6].

Lemma 4

(Local Limit Lemma) Under the assumptions of Theorem 1,

$$\begin{aligned} \mathbb {P}(Y_n=m, Z_n=n)\sim \frac{e^{\mu _{n,m}}\delta _{n,m}^{r+1}}{2\pi C\sqrt{r\varGamma (r)}}. \end{aligned}$$
(26)

Proof

Define

$$\begin{aligned} \alpha _0(n)=e^{\mu _{n,m}/2}\delta _{n,m}^{r/2}\log ^{(4+\epsilon )/8} n \end{aligned}$$
(27)

and

$$\begin{aligned} \beta _0(n)=e^{\mu _{n,m}/2}\delta _{n,m}^{r/2+1}\log ^{(8+\epsilon )/16} n. \end{aligned}$$
(28)

The asymptotics (10) and (11) imply

$$\begin{aligned} e^{\mu _{n,m}/2}\delta _{n,m}^{r/2}\asymp (m^{r+1}n^{-r})^{-1/2}(mn^{-1})^{r/2}=m^{-1/2} = o(\log ^{-(3+\epsilon )/2} n) \end{aligned}$$

by (13). Therefore \(\alpha _0(n)=o(1)\) and similarly \(\beta _0(n)=o(1)\). Let

$$\begin{aligned} R_n=[-\alpha _0(n),\alpha _0(n)]\times [-\beta _0(n), \beta _0(n)] \end{aligned}$$

and

$$\begin{aligned} {\overline{R}}_n=([-1/2,1/2]\times [-1/2, 1/2])\setminus R_n. \end{aligned}$$

We express \(\mathbb {P}(Y_n=m, Z_n=n)\) in (22) as

$$\begin{aligned} \mathbb {P}(Y_n=m, Z_n=n)= I_1+I_2, \end{aligned}$$
(29)

where

$$\begin{aligned} I_1= \int \int _{R_n} \phi _n(\alpha ,\beta ) e^{-2\pi i(\alpha m + \beta n)}\mathrm{d}\alpha \, \mathrm{d}\beta \end{aligned}$$
(30)

and

$$\begin{aligned} I_2= \int \int _{{\overline{R}}_n} \phi _n(\alpha ,\beta )e^{-2\pi i(\alpha m + \beta n)}\mathrm{d}\alpha \, \mathrm{d}\beta . \end{aligned}$$
(31)

We will estimate \(I_1\) and \(I_2\) separately.

\(\underline{\hbox {Estimate of I}_1}\)

Expanding \(\log \phi _n(\alpha ,\beta )\) into a Taylor series centred at \((\alpha ,\beta )=(0,0)\), valid for \((\alpha ,\beta )\in R_n\), gives

$$\begin{aligned} \log \phi _n(\alpha ,\beta )= & {} 2\pi i\alpha (\mathbb {E}Y_n) + 2\pi i\beta (\mathbb {E}Z_n) - 2(\pi \alpha )^2\mathrm{Var}(Y_n)- 2(\pi \beta )^2\mathrm{Var}(Z_n)\nonumber \\&-(2\pi )^2\alpha \beta \mathrm{Cov}(Y_n,Z_n) + O\left( \max _{0\le s\le 3} |\rho _s| \alpha _0^s \beta _0^{3-s}\right) , \end{aligned}$$
(32)

where

$$\begin{aligned} \rho _s= \frac{\partial ^3}{\partial \alpha ^s\partial \beta ^{3-s}}\log \phi _n(\alpha ,\beta )\Big |_{\alpha =0,\beta =0}. \end{aligned}$$

It follows from (21) that

$$\begin{aligned} \mathbb {E}(Y_n)= \frac{1}{2\pi i} \frac{\partial }{\partial \alpha }\log \phi _n(\alpha ,0)\Big |_{ \alpha =0} = -\frac{\partial }{\partial \mu }\log f_n(e^{-\mu },e^{-\delta _{n,m}})\Big |_{ \mu =\mu _{n,m}} \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}(Z_n)= \frac{1}{2\pi i} \frac{\partial }{\partial \beta }\log \phi _n(0,\beta )\Big |_{ \beta =0} = -\frac{\partial }{\partial \delta }\log f_n(e^{-\mu _{n,m}},e^{-\delta })\Big |_{ \delta =\delta _{n,m}}. \end{aligned}$$

Therefore, (1), (19), (23) and an estimate similar to one in the proof of Lemma 3 imply

$$\begin{aligned} \mathbb {E}(Y_n)= & {} -\frac{\partial }{\partial \mu }\log f(e^{-\mu },e^{-\delta _{n,m}})\Big |_{ \mu =\mu _{n,m}} - \sum _{k=n+1}^\infty b_k\frac{e^{-\mu _{n,m}-k\delta _{n,m}}}{1-e^{-\mu _{n,m}-k\delta _{n,m}}}\nonumber \\= & {} h_r\delta _{n,m}^{-r} \mathrm{Li}_{r}(e^{-\mu _{n,m}}) + h_0\mathrm{Li}_0(e^{-\mu _{n,m}}) - h_{-1}\delta _{n,m}\mathrm{Li}_{-1}(e^{-\mu _{n,m}})\nonumber \\&-\frac{\partial }{\partial \mu }\varDelta (\mu ,\delta _{n,m}) \Big |_{ \mu =\mu _{n,m}}+ O\left( e^{-\mu _{n,m}}\delta _{n,m}^{-r} \int _{n\delta _{n,m}}^\infty t^{r-1}e^{-t}\,\mathrm{d}t\right) \end{aligned}$$
(33)

and similarly

$$\begin{aligned} \mathbb {E}(Z_n)= & {} -\frac{\partial }{\partial \delta }\log f(e^{-\mu _{n,m}},e^{-\delta })\Big |_{ \delta =\delta _{n,m}} - \sum _{k=n+1}^\infty kb_k\frac{e^{-\mu _{n,m}-k\delta _{n,m}}}{1-e^{-\mu _{n,m}-k\delta _{n,m}}}\nonumber \\= & {} h_rr\delta _{n,m}^{-r-1} \mathrm{Li}_{r+1}(e^{-\mu _{n,m}}) + h_{-1}\mathrm{Li}_0(e^{-\mu _{n,m}})\nonumber \\&- \frac{\partial }{\partial \delta }\varDelta (\mu _{n,m},\delta )\Big |_{ \delta =\delta _{n,m}} + O\left( e^{-\mu _{n,m}}\delta _{n,m}^{-r-1} \int _{n\delta _{n,m}}^\infty t^{r}e^{-t}\,\mathrm{d}t\right) , \end{aligned}$$
(34)

where we have used (7). By using the method of the proof of Lemma 2 of [7] and (25) we obtain

$$\begin{aligned} \frac{\partial }{\partial \mu }\varDelta (\mu ,\delta _{n,m}) \Big |_{ \mu =\mu _{n,m}}= O\left( e^{-\mu _{n,m}}\delta _{n,m}^{C_0}\right) = o(1) \end{aligned}$$

and

$$\begin{aligned} \frac{\partial }{\partial \delta }\varDelta (\mu _{n,m},\delta )\Big |_{ \delta =\delta _{n,m}} = O\left( e^{-\mu _{n,m}}\delta _{n,m}^{C_0-1}\right) = o(1), \end{aligned}$$

where we used the assumption \(C_0>1\) in the last step. Moreover,

$$\begin{aligned} \int _{n\delta _{n,m}}^\infty t^{r-1}e^{-t}\,\mathrm{d}t \le \int _{n\delta _{n,m}}^\infty e^{-t/2}\,\mathrm{d}t = 2 e^{-n\delta _{n,m}/2}, \end{aligned}$$

where the inequality holds for n large enough, and consequently (10), (11) and (13) show that the \(O(\cdot )\) terms in (33) and (34) are of order o(1). It follows from (8) and (9) that

$$\begin{aligned} \mathbb {E}(Y_n)= & {} h_r\delta _{n,m}^{-r} \mathrm{Li}_{r}(e^{-\mu _{n,m}}) + o(1) \end{aligned}$$
(35)
$$\begin{aligned}= & {} h_r\left( \frac{rm}{n}\right) ^{-r}\mathrm{Li}_{r+1}(e^{-\mu _{n,m}})^{-r} \mathrm{Li}_r(e^{-\mu _{n,m}})^{r+1} + o(1)\nonumber \\= & {} h_r\left( \frac{rm}{n}\right) ^{-r}\varLambda (\mu _{n,m}) + o(1)\nonumber \\= & {} h_r\left( \frac{rm}{n}\right) ^{-r}\left( \frac{r^rm^{r+1}n^{-r}}{h_r}\right) +o(1)\nonumber \\= & {} m+o(1) \end{aligned}$$
(36)

and

$$\begin{aligned} \mathbb {E}(Z_n)= & {} h_rr\delta _{n,m}^{-r-1} \mathrm{Li}_{r+1}(e^{-\mu _{n,m}}) + o(1) \end{aligned}$$
(37)
$$\begin{aligned}= & {} h_r r \left( \frac{rm}{n}\right) ^{-r-1} \mathrm{Li}_{r+1}(e^{-\mu _{n,m}})^{-r} \mathrm{Li}_r(e^{-\mu _{n,m}})^{r+1} + o(1)\nonumber \\= & {} h_r r \left( \frac{rm}{n}\right) ^{-r-1} \varLambda (\mu _{n,m})+o(1)\nonumber \\= & {} h_r r \left( \frac{rm}{n}\right) ^{-r-1}\left( \frac{r^rm^{r+1}n^{-r}}{h_r}\right) + o(1)\nonumber \\= & {} n+o(1). \end{aligned}$$
(38)
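In other words, the parameters (8) and (9) tilt the measure so that \((Y_n,Z_n)\) is centred at (m, n). A numerical sketch for \(b_k=k\) (so \(r=2\), \(A=C=1\), \(h_r=1\); the values of n and m are arbitrary):

```python
# Sketch: with mu, delta from (8) and (9), E(Y_n) ~ m and E(Z_n) ~ n,
# as in (36) and (38).  Here b_k = k, so r = 2, A = C = 1, h_r = 1.
import math

def polylog(s, x, terms=300):
    return sum(x**j / j**s for j in range(1, terms + 1))

r, h_r = 2, 1.0
n, m = 10**6, 100
target = r**r * m**(r + 1) * n**(-r) / h_r        # value of Lambda(mu), by (8)
lo, hi = 1.0, 80.0
for _ in range(200):                              # bisection for Lambda^{-1}
    mid = 0.5 * (lo + hi)
    x = math.exp(-mid)
    if polylog(r + 1, x) ** (-r) * polylog(r, x) ** (r + 1) > target:
        lo = mid
    else:
        hi = mid
mu = 0.5 * (lo + hi)
x = math.exp(-mu)
delta = r * m * polylog(r + 1, x) / (n * polylog(r, x))   # (9)

EY = EZ = 0.0
kmax = min(n, int(60 / delta))                    # later terms are negligible
for k in range(1, kmax + 1):
    p = math.exp(-mu - delta * k)
    t = p / (1 - p)
    EY += k * t            # E(X_k) = b_k p/(1-p) with b_k = k
    EZ += k * k * t
```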

We also have to estimate the \(|\rho _s|\). We have

$$\begin{aligned} \rho _s= & {} \frac{\partial ^{3}}{\partial \alpha ^s\partial \beta ^{3-s}} \left( -\sum _{k=1}^n b_k\log \left( 1-e^{-\mu _{n,m}+2\pi i\alpha - \delta _{n,m}k + 2\pi i\beta k }\right) \right) \Big |_{\alpha =0,\beta =0}\\= & {} \frac{\partial ^{3}}{\partial \alpha ^s\partial \beta ^{3-s}} \left( \sum _{k=1}^n b_k\sum _{j=1}^\infty \frac{1}{j}e^{j(-\mu _{n,m}+2\pi i\alpha - \delta _{n,m}k + 2\pi i\beta k)}\right) \Big |_{\alpha =0,\beta =0}\\= & {} \sum _{k=1}^n b_k\sum _{j=1}^\infty \frac{1}{j}e^{-j\mu _{n,m}-j\delta _{n,m}k}(2\pi i j)^s(2\pi ikj)^{3-s}, \end{aligned}$$

so

$$\begin{aligned} |\rho _s|\le & {} \sum _{k=1}^n b_k k^{3-s} e^{-\delta _{n,m}k}\sum _{j=1}^\infty \frac{1}{j}e^{-j\mu _{n,m}}(2\pi j)^{3}\\= & {} O\left( e^{-\mu _{n,m}}\sum _{k=1}^nk^{r-s+2}e^{-\delta _{n,m}k}\right) \\= & {} O\left( e^{-\mu _{n,m}}\delta _{n,m}^{-r+s-3}\right) . \end{aligned}$$

Use of (10), (11), (27), and (28) shows that for \(0\le s\le 3\),

$$\begin{aligned} |\rho _s| \alpha _0^s \beta _0^{3-s}= & {} O(e^{\mu _{n,m}/2}\delta _{n,m}^{r/2}\log ^{(24+2\epsilon s+(3-s)\epsilon )/16} n)\\= & {} O\left( \left( m^{r+1}n^{-r}\right) ^{-1/2}\left( mn^{-1}\right) ^{r/2}\log ^{(12+3\epsilon )/8} n\right) \\= & {} O(m^{-1/2}\log ^{(12+3\epsilon )/8} n) \end{aligned}$$

and so (13) results in

$$\begin{aligned} \max _{0\le s\le 3} |\rho _s| \alpha _0^s \beta _0^{3-s}=o(\log ^{-\epsilon /8} n). \end{aligned}$$
(39)

It now follows from (30), (32), (36), (38), and (39) that

$$\begin{aligned} I_1\sim \int \int _{R_n} \exp \left( -2\pi ^2\left\{ \mathrm{Var}(Y_n)\alpha ^2+2\mathrm{Cov}(Y_n,Z_n)\alpha \beta +\mathrm{Var}(Z_n)\beta ^2\right\} \right) \mathrm{d}\alpha \, \mathrm{d}\beta . \end{aligned}$$

Let us define the matrix \(\varSigma _n\) by

$$\begin{aligned} \varSigma _n= \left( \begin{array}{c c} \mathrm{Var}(Y_n)&{}\quad \mathrm{Cov}(Y_n,Z_n)\\ \mathrm{Cov}(Y_n,Z_n)&{}\quad \mathrm{Var}(Z_n) \end{array} \right) \end{aligned}$$

so that

$$\begin{aligned} I_1\sim \int \int _{R_n} \exp \left( -2\pi ^2 \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) ^\mathrm{T} \varSigma _n \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) \right) \mathrm{d}\alpha \, \mathrm{d}\beta , \end{aligned}$$

where T denotes transpose. Since \(\varSigma _n\) is positive definite and symmetric it has a square root \(\sqrt{\varSigma _n}\). Define the variables u and v by

$$\begin{aligned} \left( \begin{array}{c} u\\ v \end{array} \right) = \sqrt{\varSigma _n} \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) . \end{aligned}$$

Change of variables gives

$$\begin{aligned} I_1\sim & {} \int \int _{S_n} \frac{1}{|\det (\sqrt{\varSigma _n})|} e^{-2\pi ^2(u^2+v^2)}\mathrm{d}u\, \mathrm{d}v\\= & {} \frac{1}{\sqrt{ \mathrm{Var}(Z_n)\mathrm{Var}(Y_n)-\mathrm{Cov}(Z_n,Y_n)^2}} \int \int _{S_n} e^{-2\pi ^2(u^2+v^2)}\mathrm{d}u\, \mathrm{d}v, \end{aligned}$$

where \(S_n\) is the image of \(R_n\) under the map \(\sqrt{\varSigma _n}\). Under the assumption (iii) that \(b_k\sim Ck^{r-1}\), and using \(n\delta _{n,m}\rightarrow \infty \), we have

$$\begin{aligned} \mathrm{Var}(Y_n)= & {} \sum _{k=1}^nb_k\frac{e^{-\mu _{n,m}}e^{-\delta _{n,m}k}}{(1-e^{-\mu _{n,m}}e^{-\delta _{n,m}k})^2} \nonumber \\&\sim e^{-\mu _{n,m}}\sum _{k=1}^n Ck^{r-1}e^{-\delta _{n,m}k} \sim C\varGamma (r)e^{-\mu _{n,m}}\delta _{n,m}^{-r}, \end{aligned}$$
(40)
$$\begin{aligned} \mathrm{Cov}(Y_n,Z_n)= & {} \sum _{k=1}^nk b_k\frac{e^{-\mu _{n,m}}e^{-\delta _{n,m}k}}{(1-e^{-\mu _{n,m}}e^{-\delta _{n,m}k})^2} \nonumber \\&\sim e^{-\mu _{n,m}}\sum _{k=1}^n Ck^{r} e^{-\delta _{n,m}k} \sim C\varGamma (r+1)e^{-\mu _{n,m}}\delta _{n,m}^{-r-1}, \end{aligned}$$
(41)

and

$$\begin{aligned} \mathrm{Var}(Z_n)= & {} \sum _{k=1}^nk^2 b_k\frac{e^{-\mu _{n,m}}e^{-\delta _{n,m}k}}{(1-e^{-\mu _{n,m}}e^{-\delta _{n,m}k})^2} \nonumber \\&\sim e^{-\mu _{n,m}}\sum _{k=1}^n Ck^{r+1} e^{-\delta _{n,m}k} \sim C\varGamma (r+2)e^{-\mu _{n,m}}\delta _{n,m}^{-r-2}. \end{aligned}$$
(42)
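The asymptotics (40)-(42) are easy to check numerically; the sketch below uses \(b_k=k\) (so \(C=1\), \(r=2\)) with an arbitrary small \(\delta \), truncating the sums where the terms become negligible:

```python
# Sketch: b_k = k gives C = 1, r = 2.  Compare the exact second moments of
# the negative-binomial variables against the asymptotics in (40)-(42).
import math

mu, delta = 5.0, 0.005
varY = covYZ = varZ = 0.0
for k in range(1, 20001):                 # tail beyond k = 20000 is negligible
    p = math.exp(-mu - delta * k)
    v = k * p / (1 - p) ** 2              # Var(X_k) = b_k p/(1-p)^2, b_k = k
    varY += v
    covYZ += k * v
    varZ += k * k * v

C, r = 1.0, 2
base = C * math.exp(-mu)
ratY = varY / (base * math.gamma(r) * delta**(-r))
ratC = covYZ / (base * math.gamma(r + 1) * delta**(-r - 1))
ratZ = varZ / (base * math.gamma(r + 2) * delta**(-r - 2))
det = varY * varZ - covYZ**2              # the determinant combination
ratD = det / (C**2 * r * math.gamma(r) * math.exp(-2 * mu) * delta**(-2 * r - 2))
```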

We therefore have

$$\begin{aligned} \mathrm{Var}(Z_n)\mathrm{Var}(Y_n)-\mathrm{Cov}(Z_n,Y_n)^2\sim & {} C^2(\varGamma (r+2)\varGamma (r)-\varGamma (r+1)^2)e^{-2\mu _{n,m}}\delta _{n,m}^{-2r -2}\\= & {} C^2r\varGamma (r)e^{-2\mu _{n,m}}\delta _{n,m}^{-2r -2} \end{aligned}$$

and

$$\begin{aligned} I_1\sim \frac{e^{\mu _{n,m}}\delta _{n,m}^{r+1}}{C\sqrt{r\varGamma (r)}} \int \int _{S_n} e^{-2\pi ^2(u^2+v^2)}\mathrm{d}u\, \mathrm{d}v. \end{aligned}$$

If we show that \(\liminf _{n\rightarrow \infty }S_n=\mathbb {R}^2\), then

$$\begin{aligned} \int \int _{S_n} e^{-2\pi ^2(u^2+v^2)}\mathrm{d}u\, \mathrm{d}v\sim \int \int _{\mathbb {R}^2} e^{-2\pi ^2(u^2+v^2)}\mathrm{d}u\, \mathrm{d}v = (2\pi )^{-1} \end{aligned}$$

will be an immediate consequence and

$$\begin{aligned} I_1\sim \frac{e^{\mu _{n,m}}\delta _{n,m}^{r+1}}{2\pi C\sqrt{r\varGamma (r)}} \end{aligned}$$
(43)

will have been shown. Let \(\partial R_n\) and \(\partial S_n\) denote the boundaries of \(R_n\) and \(S_n\). In view of the identity

$$\begin{aligned} \inf \left\{ u^2+v^2: \left( \begin{array}{c} u\\ v \end{array} \right) ^T\in \partial S_n \right\} = \inf \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} \alpha \\ \beta \end{array} \right) \right| ^2_2 : \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) ^T\in \partial R_n \right\} , \end{aligned}$$
(44)

where \(|\cdot |_2\) represents \(L_2\) distance, and the fact that \((0,0)\in S_n\), if we show that the right-hand side of (44) converges to \(\infty \) as \(n\rightarrow \infty \), then \(\liminf _{n\rightarrow \infty }S_n=\mathbb {R}^2\) will follow. Observe that

$$\begin{aligned} \inf _{-\beta _0\le \beta \le \beta _0} \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} \alpha _0\\ \beta \end{array} \right) \right| ^2_2 \right\}&= \inf _{-\beta _0\le \beta \le \beta _0} \left\{ \mathrm{Var}(Y_n)\alpha _0^2+2\mathrm{Cov}(Y_n,Z_n)\alpha _0\beta \right. \\&\left. \qquad \qquad \qquad \quad \quad +\mathrm{Var}(Z_n)\beta ^2 \right\} \\&\ge \inf _{\beta \in \mathbb {R}} \left\{ \mathrm{Var}(Y_n)\alpha _0^2+2\mathrm{Cov}(Y_n,Z_n)\alpha _0\beta +\mathrm{Var}(Z_n)\beta ^2 \right\} . \end{aligned}$$

The last infimum occurs when

$$\begin{aligned} \beta =-\frac{\alpha _0\mathrm{Cov}(Y_n,Z_n)}{\mathrm{Var}(Z_n)} \end{aligned}$$

and so

$$\begin{aligned} \inf _{-\beta _0\le \beta \le \beta _0} \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} \alpha _0\\ \beta \end{array} \right) \right| ^2_2 \right\}\ge & {} \mathrm{Var}(Y_n)\alpha _0^2-\frac{\alpha _0^2\mathrm{Cov}(Y_n,Z_n)^2}{\mathrm{Var}(Z_n)}\\= & {} \mathrm{Var}(Y_n)\alpha _0^2\left( 1-\frac{\mathrm{Cov}(Y_n,Z_n)^2}{\mathrm{Var}(Y_n) \mathrm{Var}(Z_n)}\right) \end{aligned}$$

Similarly,

$$\begin{aligned} \inf _{-\beta _0\le \beta \le \beta _0} \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} -\alpha _0\\ \beta \end{array} \right) \right| ^2_2 \right\} \ge \mathrm{Var}(Y_n)\alpha _0^2\left( 1-\frac{\mathrm{Cov}(Y_n,Z_n)^2}{\mathrm{Var}(Y_n) \mathrm{Var}(Z_n)}\right) ,\\ \inf _{-\alpha _0\le \alpha \le \alpha _0} \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} \alpha \\ \beta _0 \end{array} \right) \right| ^2_2 \right\} \ge \mathrm{Var}(Z_n)\beta _0^2\left( 1-\frac{\mathrm{Cov}(Z_n,Y_n)^2}{\mathrm{Var}(Z_n) \mathrm{Var}(Y_n)}\right) ,\\ \inf _{-\alpha _0\le \alpha \le \alpha _0} \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} \alpha \\ -\beta _0 \end{array} \right) \right| ^2_2 \right\} \ge \mathrm{Var}(Z_n)\beta _0^2\left( 1-\frac{\mathrm{Cov}(Z_n,Y_n)^2}{\mathrm{Var}(Z_n) \mathrm{Var}(Y_n)}\right) . \end{aligned}$$

We check using (27), (28), (40), (41), (42) that

$$\begin{aligned}&\mathrm{Var}(Y_n)\alpha _0^2\sim ( C\varGamma (r)e^{-\mu _{n,m}}\delta _{n,m}^{-r}) (e^{\mu _{n,m}/2}\delta _{n,m}^{r/2}\log ^{(4+\epsilon )/8} n)^2\\&\quad = C\varGamma (r)\log ^{(4+\epsilon )/4} n,\\&\mathrm{Var}(Z_n)\beta _0^2\sim (C\varGamma (r+2)e^{-\mu _{n,m}}\delta _{n,m}^{-r-2}) (e^{\mu _{n,m}/2}\delta _{n,m}^{r/2+1}\log ^{(8+\epsilon )/16} n)^2\\&\quad =C\varGamma (r+2)\log ^{(8+\epsilon )/8} n, \end{aligned}$$

and

$$\begin{aligned} 1-\frac{\mathrm{Cov}(Z_n,Y_n)^2}{\mathrm{Var}(Z_n) \mathrm{Var}(Y_n)} \sim 1-\frac{\varGamma (r+1)^2}{\varGamma (r)\varGamma (r+2)}>0, \end{aligned}$$
(45)

which implies

$$\begin{aligned} \lim _{n\rightarrow \infty } \inf \left\{ \left| \sqrt{\varSigma _n}\left( \begin{array}{c} \alpha \\ \beta \end{array} \right) \right| ^2_2 : \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) ^T\in \partial R_n \right\} =\infty . \end{aligned}$$


\(\underline{\hbox {Estimate of I}_2}\)

Similarly to a calculation in the proof of Lemma 3 of [7], we have

$$\begin{aligned} \log |\phi _n(\alpha ,\beta )|= & {} \mathfrak {R}(\log \phi _n(\alpha ,\beta ))\\= & {} \mathfrak {R}\left( -\sum _{k=1}^n b_k\log \left( \frac{1-e^{-\mu _{n,m}+2\pi i\alpha -\delta _{n,m}k + 2\pi i \beta k}}{1-e^{-\mu _{n,m}-\delta _{n,m}k}} \right) \right) \\= & {} -\frac{1}{2} \sum _{k=1}^n b_k \log \left( 1+\frac{4e^{-\mu _{n,m}-\delta _{n,m}k}\sin ^2(\pi \alpha +\pi \beta k)}{(1-e^{-\mu _{n,m}-\delta _{n,m}k})^2} \right) \\\le & {} -\frac{1}{2} \sum _{k=1}^n b_k \log \left( 1+4e^{-\mu _{n,m}-\delta _{n,m}k}\sin ^2(\pi \alpha +\pi \beta k) \right) \\\le & {} -\frac{\log 5}{2} \sum _{k=1}^n b_k e^{-\mu _{n,m}-\delta _{n,m}k}\sin ^2(\pi \alpha +\pi \beta k) \end{aligned}$$

and the application of (3.70) in [6] gives

$$\begin{aligned} \log |\phi _n(\alpha ,\beta )|\le -2\log 5 \sum _{k=1}^n b_k e^{-\mu _{n,m}-\delta _{n,m}k} \parallel \alpha +\beta k\parallel ^2, \end{aligned}$$
(46)

where \(\{x\}\) is defined to be the fractional part of x and

$$\begin{aligned} \parallel x\parallel = \left\{ \begin{array}{ll} \{ x\}&{}\quad \mathrm{if } \ \{x\}\le 1/2;\\ 1-\{ x\}&{}\quad \mathrm{if }\ \{x\}> 1/2.\\ \end{array} \right. \end{aligned}$$
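The passage from \(\sin ^2\) to \(\parallel \cdot \parallel ^2\) in (46) rests on the elementary inequality \(\sin ^2(\pi x)\ge 4\parallel x\parallel ^2\) (presumably the content of (3.70) in [6]); a quick grid check (sketch):

```python
# Grid check (sketch) of sin^2(pi x) >= 4 ||x||^2, where ||x|| is the
# distance from x to the nearest integer, as defined above.
import math

def nearest_int_dist(x):
    frac = x - math.floor(x)              # {x}, the fractional part
    return frac if frac <= 0.5 else 1.0 - frac

ok = all(math.sin(math.pi * x) ** 2 >= 4 * nearest_int_dist(x) ** 2 - 1e-12
         for x in (i / 1000.0 for i in range(-3000, 3001)))
```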

Define

$$\begin{aligned} V_n(\alpha ,\beta ) = \sum _{k=1}^n b_k e^{-\mu _{n,m}-\delta _{n,m}k}\parallel \alpha +\beta k\parallel ^2. \end{aligned}$$

We will find lower bounds for \(V_n(\alpha ,\beta )\) on four regions which partition \(\overline{R_n}\).

First, suppose that \(\alpha _0< |\alpha |\le 1/2\) and \(|\beta |\le \beta _0\). Note that for such \(\beta \),

$$\begin{aligned} |\beta \delta _{n,m}^{-1}|\le & {} \beta _0\delta _{n,m}^{-1}\\= & {} e^{\mu _{n,m}/2}\delta _{n,m}^{r/2}\log ^{(8+\epsilon )/16} n\\= & {} o(\alpha _0(n)). \end{aligned}$$

By the definition of \(\parallel \! x\!\parallel \) we have

$$\begin{aligned} \parallel \! x + y\!\parallel \ge \parallel x\parallel - \, |y| \quad \forall x,y\in \mathbb {R}. \end{aligned}$$

Therefore, for all \(1\le k\le \delta _{n,m}^{-1}\),

$$\begin{aligned} \parallel \!\alpha +\beta k\!\parallel\ge & {} \parallel \!\alpha \!\parallel - |\beta \delta _{n,m}^{-1}|\\\ge & {} \alpha _0-|\beta \delta _{n,m}^{-1}|\\= & {} \alpha _0(1+o(1)). \end{aligned}$$

It follows that

$$\begin{aligned} V_n(\alpha ,\beta )\ge & {} (1+o(1))e^{-\mu _{n,m}}\alpha _0^2\sum _{k=1}^{\delta _{n,m}^{-1}}b_k e^{-\delta _{n,m}k}\nonumber \\\sim & {} Ce^{-\mu _{n,m}}\alpha _0^2\sum _{k=1}^{\delta _{n,m}^{-1}}k^{r-1} e^{-\delta _{n,m}k}\nonumber \\\asymp & {} e^{-\mu _{n,m}}\alpha _0^2\delta _{n,m}^{-r}\nonumber \\= & {} \log ^{(4+\epsilon )/4} n. \end{aligned}$$
(47)
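
The passage from the sum to the power of \(\delta _{n,m}\) in (47) is a Riemann-sum comparison; a sketch of this standard step, with the integral that produces the implied constant, is:

```latex
\sum_{k=1}^{\lfloor\delta_{n,m}^{-1}\rfloor} k^{r-1} e^{-\delta_{n,m}k}
  = \delta_{n,m}^{-r}\sum_{k=1}^{\lfloor\delta_{n,m}^{-1}\rfloor}
      (\delta_{n,m}k)^{r-1} e^{-\delta_{n,m}k}\,\delta_{n,m}
  \sim \delta_{n,m}^{-r}\int_0^1 v^{r-1}e^{-v}\,dv,
  \qquad \delta_{n,m}\rightarrow 0,
```

so the sum is \(\asymp \delta _{n,m}^{-r}\).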

Suppose that \(|\alpha |\le 1/4\) and \(\beta _0<|\beta |\le \delta _{n,m}\). Define \(F_i(u)=\int _0^u v^{r-1+i}e^{-v}\,dv\), \(u\ge 0\), \(i=0,1,2\). Observe that \(F_i(u)=\frac{u^{r+i}}{r+i}+O(u^{r+i+1})\), \(u\rightarrow 0\), and that therefore

$$\begin{aligned} F_1(u)^2-F_0(u)F_2(u)=-\frac{1}{r(r+2)(r+1)^2}u^{2r+2}+O(u^{2r+3}). \end{aligned}$$
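
As a numerical illustration of the negativity of this quantity for small arguments (not part of the proof; the values \(r=1.5\) and \(u_0=0.2\) are sample choices), one can approximate the \(F_i\) by quadrature:

```python
# Numerical sanity check: F_1(u0)^2 - F_0(u0) F_2(u0) is negative and close
# to the leading term -u0^{2r+2}/(r (r+2) (r+1)^2) of the expansion above.
import math

R = 1.5   # sample value of the parameter r
U0 = 0.2  # sample value of u0, chosen with 0 < u0 < 1/4

def F(i, u, r=R, steps=20000):
    """Midpoint-rule approximation of F_i(u) = int_0^u v^{r-1+i} e^{-v} dv."""
    h = u / steps
    return h * sum((h * (j + 0.5)) ** (r - 1 + i) * math.exp(-h * (j + 0.5))
                   for j in range(steps))

disc = F(1, U0) ** 2 - F(0, U0) * F(2, U0)
leading = -U0 ** (2 * R + 2) / (R * (R + 2) * (R + 1) ** 2)
print(disc < 0, 0.5 < disc / leading < 1.5)
```
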

Choose \(0<u_0<1/4\) small enough so that \(F_1(u_0)^2-F_0(u_0)F_2(u_0)<0\). Then for all \(0\le k\le u_0\delta _{n,m}^{-1}\), we have \(|\alpha +\beta k|\le |\alpha |+|\beta k|\le 1/4+1/4=1/2\). Therefore, for all such k, \(\parallel \alpha + \beta k\parallel =|\alpha + \beta k|\) and

$$\begin{aligned} V_n(\alpha ,\beta )&\ge \sum _{k=1}^{u_0\delta _{n,m}^{-1}}b_ke^{-\mu _{n,m}-\delta _{n,m}k}(\alpha +\beta k)^2\\&\sim Ce^{-\mu _{n,m}}\left( \alpha ^2\sum _{k=1}^{u_0\delta _{n,m}^{-1}}k^{r-1}e^{-\delta _{n,m}k}+ 2\alpha \beta \sum _{k=1}^{u_0\delta _{n,m}^{-1}}k^re^{-\delta _{n,m}k}+ \beta ^2\sum _{k=1}^{u_0\delta _{n,m}^{-1}}k^{r+1}e^{-\delta _{n,m}k} \right) \\&\sim Ce^{-\mu _{n,m}}\left( \alpha ^2\delta _{n,m}^{-r}F_0(u_0)+ 2\alpha \beta \delta _{n,m}^{-r-1}F_1(u_0)+ \beta ^2\delta _{n,m}^{-r-2}F_2(u_0) \right) \\&= Ce^{-\mu _{n,m}}\delta _{n,m}^{-r-2}\beta ^2\left( F_0(u_0)x^2+ 2F_1(u_0) x +F_2(u_0)\right) , \end{aligned}$$

where \(x=\alpha \beta ^{-1}\delta _{n,m}\). Because the quadratic \(q(x)=F_0(u_0)x^2+ 2F_1(u_0) x +F_2(u_0)\) has discriminant \(4(F_1(u_0)^2 - F_0(u_0)F_2(u_0))<0\) and leading coefficient \(F_0(u_0)>0\), there is a constant \(K>0\) such that \(q(x)>K\) for all \(x\in \mathbb {R}\). Therefore, for n large enough we have

$$\begin{aligned} V_n(\alpha ,\beta )>CKe^{-\mu _{n,m}}\delta _{n,m}^{-r-2}\beta _0^2=CK\log ^{(8+\epsilon )/8} n. \end{aligned}$$
(48)
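
The constant \(K\) in the argument above can be made explicit: a quadratic \(ax^2+2bx+c\) with \(a>0\) attains its minimum \((ac-b^2)/a\) at \(x=-b/a\), so

```latex
\min_{x\in\mathbb{R}}\left(F_0(u_0)x^2+2F_1(u_0)x+F_2(u_0)\right)
  = \frac{F_0(u_0)F_2(u_0)-F_1(u_0)^2}{F_0(u_0)} > 0,
```

and any positive \(K\) below this minimum value works.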

Suppose that \(1/4<|\alpha |\le 1/2\) and \(\beta _0<|\beta |\le \delta _{n,m}\). Then, for \(1\le k\le \delta _{n,m}^{-1}/8\),

$$\begin{aligned} \parallel \!\alpha + \beta k\!\parallel \ge \parallel \!\alpha \!\parallel -|\beta k|\ge 1/4-\delta _{n,m}(\delta _{n,m}^{-1}/8)=1/8 \end{aligned}$$

and

$$\begin{aligned} V_n(\alpha ,\beta )&\ge \frac{1}{64}\sum _{k=1}^{\delta _{n,m}^{-1}/8}b_ke^{-\mu _{n,m}-\delta _{n,m}k}\\ &\sim \frac{Ce^{-\mu _{n,m}}}{64}\sum _{k=1}^{\delta _{n,m}^{-1}/8}k^{r-1}e^{-\delta _{n,m}k}\\ &\asymp e^{-\mu _{n,m}}\delta _{n,m}^{-r}\\ &\asymp (m^{r+1}n^{-r})(m^{-r}n^r)\\ &= m\\ &\ge \log ^{3+\epsilon } n \end{aligned}$$
(49)

for n large enough.

Finally, suppose that \(|\alpha |\le 1/2\) and \(\delta _{n,m}<|\beta |\le 1/2\). Define

$$\begin{aligned} Q(\alpha ,\beta ;n)=\{1\le k\le n:\,\parallel \!\alpha +\beta k\!\parallel \ge 1/4\}. \end{aligned}$$

Clearly,

$$\begin{aligned} Q(\alpha ,\beta ;n)&= \{1\le k\le n: j+1/4\le \alpha +\beta k\le j+3/4 \text{ for some } j=0,1,\ldots \}\\ &= \{1\le k\le n: j+(1/4-\alpha )\le \beta k\le j+(3/4-\alpha ) \text{ for some } j=0,1,\ldots \}. \end{aligned}$$

Routine modifications to the estimates on pages 18 and 19 of [6], which produce the lower bound (3.77) in that paper, result in

$$\begin{aligned} \sum _{k=1}^nb_k e^{-\delta _{n,m}k}\parallel \alpha +\beta k\parallel ^2 \ge \eta \delta _{n,m}^{-\rho _r} \end{aligned}$$

for a constant \(\eta >0\) and therefore

$$\begin{aligned} V_n(\alpha ,\beta )\ge \eta e^{-\mu _{n,m}}\delta _{n,m}^{-r}\asymp m>\log ^{3+\epsilon } n \end{aligned}$$
(50)

for n large enough.

Combining (47), (48), (49) and (50) shows that \(V_n(\alpha ,\beta )\ge \varTheta (\log ^{(8+\epsilon )/8}n)\) uniformly for \((\alpha ,\beta )\in \overline{R_n}\). From this lower bound and (46) it follows that

$$\begin{aligned} I_2 = O\left( \exp \left( -\varTheta \left( \log ^{(8+\epsilon )/8}n\right) \right) \right) . \end{aligned}$$
(51)

Note that (10), (11) and (43) imply

$$\begin{aligned} I_1\asymp (m^{r+1} n^{-r})^{-1} (mn^{-1})^{r+1}=n^{-1}. \end{aligned}$$
(52)
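
The exponent cancellation in (52) can be spot-checked arithmetically (illustration only; the values of m, n and r below are arbitrary samples):

```python
# Check that (m^{r+1} n^{-r})^{-1} (m n^{-1})^{r+1} = n^{-1} for sample
# values: the exponents of m cancel and those of n sum to -1.
import math

m, n, r = 10.0, 400.0, 1.5
lhs = (m ** (r + 1) * n ** (-r)) ** (-1) * (m * n ** (-1)) ** (r + 1)
print(math.isclose(lhs, 1.0 / n, rel_tol=1e-9))
```
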

Together, (29), (43), (51) and (52) imply (26). \(\square \)

Proof of Theorem 1

The theorem follows from (22) and Lemmas 3 and 4. \(\square \)