Asymptotics of [math]\displaystyle{ H_t }[/math]
The Gamma function
The Gamma function is defined for [math]\displaystyle{ \mathrm{Re}(s) \gt 0 }[/math] by the formula
- [math]\displaystyle{ \Gamma(s) = \int_0^\infty x^s e^{-x} \frac{dx}{x} }[/math]
and hence by change of variables
- [math]\displaystyle{ \Gamma(s) = \int_{-\infty}^\infty \exp( s u - e^u )\ du. \quad (1.1) }[/math]
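As a quick numerical sanity check of (1.1) for real [math]\displaystyle{ s \gt 0 }[/math], one can compare a truncated Riemann sum against the standard library's real Gamma function (a sketch; the truncation window and step count are ad hoc choices):

```python
import math

def gamma_via_11(s, lo=-30.0, hi=6.0, n=200000):
    """Midpoint Riemann sum for (1.1): Gamma(s) = int exp(s*u - e^u) du, real s > 0."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * h
        total += math.exp(s * u - math.exp(u))
    return total * h

for s in (0.5, 1.0, 3.7):
    assert abs(gamma_via_11(s) - math.gamma(s)) < 1e-5 * math.gamma(s)
```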
It can be extended to other values of [math]\displaystyle{ s }[/math] by analytic continuation or by contour shifting; for instance, if [math]\displaystyle{ \mathrm{Im}(s)\gt 0 }[/math], one can write
- [math]\displaystyle{ \Gamma(s) = \int_C \exp( s u - e^u )\ du \quad (1.1') }[/math]
where [math]\displaystyle{ C }[/math] is a contour from [math]\displaystyle{ +i\infty }[/math] to [math]\displaystyle{ \infty }[/math] that stays within a bounded distance of the upper imaginary and right real axes.
The Gamma function obeys the Euler reflection formula
- [math]\displaystyle{ \Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)} \quad (1.2) }[/math]
and the duplication formula
- [math]\displaystyle{ \Gamma(1-s) = \frac{\Gamma(\frac{1-s}{2}) \Gamma(1-\frac{s}{2})}{\pi^{1/2} 2^s}. \quad (1.3) }[/math]
In particular one has
- [math]\displaystyle{ \Gamma(\frac{s}{2}) \Gamma(1-\frac{s}{2}) = \frac{\pi}{\sin(\pi s/2)} \quad (1.4) }[/math]
and thus on combining (1.3) and (1.4)
- [math]\displaystyle{ \Gamma(s/2) \Gamma(1-s) = \frac{\pi^{1/2}}{2^s \sin(\pi s/2)} \Gamma(\frac{1-s}{2}) \quad(1.5) }[/math]
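For real [math]\displaystyle{ 0 \lt s \lt 1 }[/math] (where every Gamma factor above has positive real argument) the identities (1.2)–(1.5) can be spot-checked numerically with only the standard library; a quick sketch:

```python
import math

def close(a, b, tol=1e-10):
    return abs(a - b) <= tol * max(abs(a), abs(b))

for s in (0.1, 0.3, 0.5, 0.9):
    g = math.gamma
    # Euler reflection (1.2)
    assert close(g(s) * g(1 - s), math.pi / math.sin(math.pi * s))
    # duplication formula (1.3)
    assert close(g(1 - s), g((1 - s) / 2) * g(1 - s / 2) / (math.sqrt(math.pi) * 2 ** s))
    # reflection at s/2, i.e. (1.4)
    assert close(g(s / 2) * g(1 - s / 2), math.pi / math.sin(math.pi * s / 2))
    # combined identity (1.5)
    assert close(g(s / 2) * g(1 - s),
                 math.sqrt(math.pi) / (2 ** s * math.sin(math.pi * s / 2)) * g((1 - s) / 2))
```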
Since [math]\displaystyle{ s \Gamma(s) = \Gamma(s+1) }[/math], we have
- [math]\displaystyle{ \frac{s(s-1)}{2} \Gamma(\frac{s}{2}) = 2 \Gamma(\frac{s+4}{2}) - 3 \Gamma(\frac{s+2}{2}). \quad (1.6) }[/math]
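The identity (1.6), which follows from applying [math]\displaystyle{ s \Gamma(s) = \Gamma(s+1) }[/math] twice, can be spot-checked the same way for real [math]\displaystyle{ s \gt 0 }[/math] (sample points arbitrary):

```python
import math

# (1.6): s(s-1)/2 * Gamma(s/2) = 2*Gamma((s+4)/2) - 3*Gamma((s+2)/2)
for s in (0.5, 1.5, 2.0, 7.3):
    lhs = s * (s - 1) / 2 * math.gamma(s / 2)
    rhs = 2 * math.gamma((s + 4) / 2) - 3 * math.gamma((s + 2) / 2)
    assert abs(lhs - rhs) <= 1e-10 * max(1.0, abs(lhs))
```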
We have the Stirling approximation
- [math]\displaystyle{ \Gamma(s) = \sqrt{2\pi/s} \exp( s \log s - s + O(1/|s|) ) }[/math]
whenever [math]\displaystyle{ \mathrm{Re}(s) \gg 1 }[/math]. If we have [math]\displaystyle{ s = \sigma+iT }[/math] for some large [math]\displaystyle{ T }[/math] and bounded [math]\displaystyle{ \sigma }[/math], this gives
- [math]\displaystyle{ \Gamma(s) \approx \sqrt{2\pi} T^{\sigma -1/2} e^{-\pi T/2} \exp(i (T \log T - T + \pi \sigma/2 - \pi/4)). (1.7) }[/math]
Another crude but useful approximation is
- [math]\displaystyle{ \Gamma(s+h) \approx \Gamma(s) s^h (1.8) }[/math]
for [math]\displaystyle{ s }[/math] as above and [math]\displaystyle{ h=O(1) }[/math].
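Both (1.7) and (1.8) are easy to test numerically; since the standard library only provides Gamma for real arguments, the sketch below supplies a complex Gamma via the standard Lanczos approximation (the [math]\displaystyle{ g=7 }[/math] coefficient set is a common public-domain choice, and the 1% tolerances are ad hoc):

```python
import cmath, math

# Lanczos coefficients (g = 7, 9 terms): a standard public-domain choice
_c = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Complex Gamma function via the Lanczos approximation (valid for Re(z) > 1/2)."""
    z = z - 1
    x = _c[0] + sum(_c[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

sigma, T = 1.25, 200.0
s = complex(sigma, T)
# modulus and phase predicted by (1.7)
pred = math.sqrt(2 * math.pi) * T ** (sigma - 0.5) * math.exp(-math.pi * T / 2)
phase = cmath.exp(1j * (T * math.log(T) - T + math.pi * sigma / 2 - math.pi / 4))
assert abs(abs(cgamma(s)) / pred - 1) < 0.01
assert abs(cgamma(s) / (pred * phase) - 1) < 0.01
# (1.8): Gamma(s + h) ~ Gamma(s) s^h for bounded h
assert abs(cgamma(s + 0.5) / (cgamma(s) * s ** 0.5) - 1) < 0.01
```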
The Riemann-Siegel formula for [math]\displaystyle{ t=0 }[/math]
Proposition 1 (Riemann-Siegel formula for [math]\displaystyle{ t=0 }[/math]) For any natural numbers [math]\displaystyle{ N,M }[/math] and complex number [math]\displaystyle{ s }[/math] that is not an integer, we have
- [math]\displaystyle{ \zeta(s) = \sum_{n=1}^N \frac{1}{n^s} + \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{m=1}^M \frac{1}{m^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw }[/math]
where [math]\displaystyle{ w^{s-1} := \exp((s-1) \log w) }[/math] and we use the branch of the logarithm with imaginary part in [math]\displaystyle{ [0,2\pi) }[/math], and [math]\displaystyle{ C_M }[/math] is any contour from [math]\displaystyle{ +\infty }[/math] to [math]\displaystyle{ +\infty }[/math] going once anticlockwise around the zeroes [math]\displaystyle{ 2\pi i m }[/math] of [math]\displaystyle{ e^w-1 }[/math] with [math]\displaystyle{ |m| \leq M }[/math], but does not go around any other zeroes.
Proof This equation is in [T1986, p. 82], but we give a proof here. The right-hand side is meromorphic in [math]\displaystyle{ s }[/math], so it will suffice to establish that
- The right-hand side is independent of [math]\displaystyle{ N }[/math];
- The right-hand side is independent of [math]\displaystyle{ M }[/math];
- Whenever [math]\displaystyle{ \mathrm{Re}(s)\gt 1 }[/math] and [math]\displaystyle{ s }[/math] is not an integer, the right-hand side converges to [math]\displaystyle{ \zeta(s) }[/math] if [math]\displaystyle{ M=0 }[/math] and [math]\displaystyle{ N \to \infty }[/math].
We begin with the first claim. It suffices to show that the right-hand sides for [math]\displaystyle{ N }[/math] and [math]\displaystyle{ N-1 }[/math] agree for every [math]\displaystyle{ N \gt 1 }[/math]. Subtracting, it suffices to show that
- [math]\displaystyle{ 0 = \frac{1}{N^s} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} (e^{-Nw} - e^{-(N-1)w})}{e^w-1}\ dw. }[/math]
The integrand here simplifies to [math]\displaystyle{ - w^{s-1} e^{-Nw} }[/math], which on shrinking [math]\displaystyle{ C_M }[/math] to wrap around the positive real axis becomes [math]\displaystyle{ N^{-s} \Gamma(s) (1 - e^{2\pi i(s-1)}) }[/math]. The claim then follows from the Euler reflection formula [math]\displaystyle{ \Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)} }[/math].
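The resulting cancellation amounts to a scalar identity that can be spot-checked numerically for real non-integer [math]\displaystyle{ s \in (0,1) }[/math] (the sample points are arbitrary):

```python
import cmath, math

# 1/N^s + e^{-i pi s} Gamma(1-s)/(2 pi i) * N^{-s} Gamma(s) (1 - e^{2 pi i (s-1)}) = 0
for N in (2, 7):
    for s in (0.25, 0.6, 0.9):
        val = (N ** -s
               + cmath.exp(-1j * math.pi * s) * math.gamma(1 - s) / (2j * math.pi)
               * N ** -s * math.gamma(s) * (1 - cmath.exp(2j * math.pi * (s - 1))))
        assert abs(val) < 1e-12
```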
Now we verify the second claim. It suffices to show that the right-hand sides for [math]\displaystyle{ M }[/math] and [math]\displaystyle{ M-1 }[/math] agree for every [math]\displaystyle{ M \gt 1 }[/math]. Subtracting, it suffices to show that
- [math]\displaystyle{ 0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \frac{1}{M^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M - C_{M-1}} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. }[/math]
The contour [math]\displaystyle{ C_M - C_{M-1} }[/math] encloses the simple poles at [math]\displaystyle{ +2\pi i M }[/math] and [math]\displaystyle{ -2\pi i M }[/math], which have residues of [math]\displaystyle{ (2\pi i M)^{s-1} = - i (2\pi M)^{s-1} e^{\pi i s/2} }[/math] and [math]\displaystyle{ (-2\pi i M)^{s-1} = i (2\pi M)^{s-1} e^{3\pi i s/2} }[/math] respectively. So, on canceling the factor of [math]\displaystyle{ M^{s-1} }[/math] it suffices to show that
- [math]\displaystyle{ 0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} + e^{-i\pi s} \Gamma(1-s) (2\pi)^{s-1} i (e^{3\pi i s/2} - e^{\pi i s/2}). }[/math]
But this follows from the duplication formula [math]\displaystyle{ \Gamma(1-s) = \frac{\Gamma(\frac{1-s}{2}) \Gamma(1-\frac{s}{2})}{\pi^{1/2} 2^s} }[/math] and the Euler reflection formula [math]\displaystyle{ \Gamma(\frac{s}{2}) \Gamma(1-\frac{s}{2}) = \frac{\pi}{\sin(\pi s/2)} }[/math].
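This identity can also be spot-checked numerically for real [math]\displaystyle{ s \in (0,1) }[/math], where all the Gamma factors have real arguments:

```python
import cmath, math

# 0 = pi^{s-1/2} Gamma((1-s)/2)/Gamma(s/2)
#     + e^{-i pi s} Gamma(1-s) (2 pi)^{s-1} i (e^{3 pi i s/2} - e^{pi i s/2})
for s in (0.2, 0.5, 0.8):
    term1 = math.pi ** (s - 0.5) * math.gamma((1 - s) / 2) / math.gamma(s / 2)
    term2 = (cmath.exp(-1j * math.pi * s) * math.gamma(1 - s) * (2 * math.pi) ** (s - 1)
             * 1j * (cmath.exp(3j * math.pi * s / 2) - cmath.exp(1j * math.pi * s / 2)))
    assert abs(term1 + term2) < 1e-10
```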
Finally we verify the third claim. Since [math]\displaystyle{ \zeta(s) = \lim_{N \to \infty} \sum_{n=1}^N \frac{1}{n^s} }[/math], it suffices to show that
- [math]\displaystyle{ \lim_{N \to \infty} \int_{C_0} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw = 0. }[/math]
We take [math]\displaystyle{ C_0 }[/math] to be a contour that traverses a [math]\displaystyle{ 1/N }[/math]-neighbourhood of the real axis. Writing [math]\displaystyle{ C_0 = \frac{1}{N} C'_0 }[/math], with [math]\displaystyle{ C'_0 }[/math] independent of [math]\displaystyle{ N }[/math], we can thus write the left-hand side as
- [math]\displaystyle{ \lim_{N \to \infty} N^{-s} \int_{C'_0} \frac{w^{s-1} e^{-w}}{e^{w/N}-1}\ dw, }[/math]
and the claim follows from the dominated convergence theorem. [math]\displaystyle{ \Box }[/math]
Applying the Riemann-Siegel formula to the Riemann xi function [math]\displaystyle{ \xi(s) = \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s) }[/math], we have
- [math]\displaystyle{ \xi(s) = F_{0,N}(s) + \overline{F_{0,M}(\overline{1-s})} + R_{0,N,M}(s) \quad(2.1) }[/math]
where
- [math]\displaystyle{ F_{0,N}(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s} \quad(2.2) }[/math]
and
- [math]\displaystyle{ R_{0,N,M}(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. \quad(2.3) }[/math]
A contour integral
Lemma 2 Let [math]\displaystyle{ L }[/math] be a line in the direction [math]\displaystyle{ \mathrm{arg} w = \pi/4 }[/math] passing between [math]\displaystyle{ 0 }[/math] and [math]\displaystyle{ 2\pi i }[/math]. Then for any complex [math]\displaystyle{ \alpha }[/math], the contour integral
- [math]\displaystyle{ \Psi(\alpha) := \int_L \frac{\exp( \frac{i}{4\pi} z^2 + \alpha z)}{e^z - 1}\ dz }[/math]
can be given explicitly by the formula
- [math]\displaystyle{ \Psi(\alpha) = 2\pi \frac{\cos \pi(\frac{1}{2} \alpha^2 - \alpha - \frac{1}{8})}{\cos(\pi \alpha)} \exp( \frac{i \pi}{2} \alpha^2 - \frac{5 \pi i}{8} ) }[/math].
Proof The integrand has a residue of [math]\displaystyle{ 1 }[/math] at [math]\displaystyle{ 0 }[/math], hence on shifting the contour downward by [math]\displaystyle{ 2\pi i }[/math] we have
- [math]\displaystyle{ \Psi(\alpha) = -2\pi i + \int_L \frac{\exp( \frac{i}{4\pi} (z-2\pi i)^2 + \alpha (z-2\pi i) )}{e^z-1}\ dz. }[/math]
The right-hand side expands as
- [math]\displaystyle{ -2\pi i - e^{-2\pi i \alpha} \int_L \frac{\exp( \frac{i}{4\pi} z^2 + (\alpha+1) z)}{e^z-1}\ dz }[/math]
which we can write as
- [math]\displaystyle{ -2\pi i - e^{-2\pi i \alpha} (\Psi(\alpha) + \int_L \exp( \frac{i}{4\pi} z^2 + \alpha z)\ dz). }[/math]
The last integral is a standard gaussian integral, which can be evaluated as [math]\displaystyle{ \sqrt{\frac{\pi}{-i/4\pi}} \exp( \pi i \alpha^2) = 2\pi e^{i\pi/4} \exp( \pi i \alpha^2) }[/math]. Hence
- [math]\displaystyle{ \Psi(\alpha) = -2\pi i - e^{-2\pi i \alpha} (\Psi(\alpha) + 2\pi e^{i\pi/4} \exp( \pi i \alpha^2)), }[/math]
and the claim then follows after some algebra. [math]\displaystyle{ \Box }[/math]
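Lemma 2 can be spot-checked numerically by discretising the contour integral along a concrete choice of [math]\displaystyle{ L }[/math] (here the line through [math]\displaystyle{ \pi i }[/math] in direction [math]\displaystyle{ e^{i\pi/4} }[/math], traversed upwards; the truncation radius and step count are ad hoc):

```python
import cmath, math

def Psi_integral(alpha, R=30.0, n=200000):
    """Midpoint rule along L: z = pi*i + e^{i pi/4} r, r in (-R, R)."""
    d = cmath.exp(1j * math.pi / 4)      # direction of L
    h = 2 * R / n
    total = 0j
    for k in range(n):
        r = -R + (k + 0.5) * h
        z = math.pi * 1j + d * r
        total += cmath.exp(1j * z * z / (4 * math.pi) + alpha * z) / (cmath.exp(z) - 1)
    return total * d * h

def Psi_closed(alpha):
    """The closed form of Lemma 2."""
    return (2 * math.pi * cmath.cos(math.pi * (alpha ** 2 / 2 - alpha - 0.125))
            / cmath.cos(math.pi * alpha)
            * cmath.exp(1j * math.pi * alpha ** 2 / 2 - 5j * math.pi / 8))

for alpha in (0.3, -0.2, 0.1 + 0.2j):
    a, b = Psi_integral(alpha), Psi_closed(alpha)
    assert abs(a - b) < 1e-5 * max(1.0, abs(b))
```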
We conclude from (2.3) that
- [math]\displaystyle{ R_{0,N,M}(s) \approx - 2 \Gamma(\frac{5-s}{2}) \frac{\pi^{(-s-1)/2}}{2^s} e^{-\pi i s/2} \exp( -\frac{t \pi^2}{64} ) (2\pi i M)^{s-1} \Psi(\frac{s-2\pi i MN}{2\pi i M}) }[/math]
- [math]\displaystyle{ = i \Gamma(\frac{5-s}{2}) \pi^{-(s+1)/2} \exp( -\frac{t \pi^2}{64} ) (\pi M)^{s-1} \Psi(\frac{s}{2\pi i M} - N). }[/math]
Heuristic approximation at [math]\displaystyle{ t=0 }[/math]
To estimate the remainder term [math]\displaystyle{ R_{0,N,M}(s) }[/math] in (2.3) with [math]\displaystyle{ M,N = \sqrt{\mathrm{Im}(s) / 2\pi} + O(1) }[/math], we make the change of variables [math]\displaystyle{ w = z + 2\pi i M }[/math] to obtain
- [math]\displaystyle{ R_{0,N,M}(s) = \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M - 2\pi i M} \frac{(z+2\pi i M)^{s-1} e^{-Nz}}{e^z-1}\ dz }[/math]
Steepest descent heuristics suggest that the dominant portion of this integral comes when [math]\displaystyle{ z=O(1) }[/math]. In this regime we may Taylor expand
- [math]\displaystyle{ (z+2\pi i M)^{s-1} = (2\pi i M)^{s-1} \exp( (s-1) \log(1 + \frac{z}{2\pi i M}) ) }[/math]
- [math]\displaystyle{ \approx (2\pi i M)^{s-1} \exp( (s-1) \frac{z}{2\pi i M} -\frac{s-1}{2} (\frac{z}{2\pi i M})^2 ) }[/math]
- [math]\displaystyle{ \approx (2\pi i M)^{s-1} \exp( s \frac{z}{2\pi i M} + \frac{i}{4\pi} z^2 ); }[/math]
using this approximation and then shifting the contour to [math]\displaystyle{ -L }[/math] (cf. [T1986, Section 4.16]), we conclude that
- [math]\displaystyle{ R_{0,N,M}(s) \approx - \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i M)^{s-1}\int_L \frac{\exp( (\frac{s}{2\pi i M}-N)z + \frac{i}{4\pi} z^2 )}{e^z-1}\ dz }[/math]
and hence by Lemma 2
- [math]\displaystyle{ R_{0,N,M}(s) \approx - \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i M)^{s-1}\Psi(\frac{s}{2\pi i M}-N). (4.1) }[/math]
Using (1.7) one can calculate that this expression has magnitude [math]\displaystyle{ O( x^{6/4} e^{-\pi x/8} ) }[/math].
If we drop the [math]\displaystyle{ R_{0,N,M} }[/math] term, we have
- [math]\displaystyle{ H_0(x+iy) \approx \frac{1}{8} F_{0,N}(\frac{1+ix-y}{2}) + \frac{1}{8} \overline{F_{0,M}(\frac{1+ix+y}{2})}. }[/math]
From (2.2) and (1.7) we have
- [math]\displaystyle{ |\frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2)| \asymp x^{(7-y)/4} e^{-\pi x/8} }[/math]
when [math]\displaystyle{ s = (1+ix-y)/2 }[/math] and
- [math]\displaystyle{ |\frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2)| \asymp x^{(7+y)/4} e^{-\pi x/8} }[/math]
when [math]\displaystyle{ s = (1+ix+y)/2 }[/math]. Thus we expect the second term to dominate, and typically we would expect
- [math]\displaystyle{ |H_0(x+iy)| \asymp x^{(7+y)/4} e^{-\pi x/8}. }[/math]
Extending the Riemann-Siegel formula to positive [math]\displaystyle{ t }[/math]
Evolving [math]\displaystyle{ H_0(z) = \frac{1}{8} \xi(\frac{1+iz}{2}) }[/math] by the backwards heat equation [math]\displaystyle{ \partial_t H_t(z) = -\partial_{zz} H_t(z) }[/math] is equivalent to evolving the Riemann [math]\displaystyle{ \xi }[/math] function [math]\displaystyle{ \xi = \xi_0 }[/math] by the forwards heat equation [math]\displaystyle{ \partial_t \xi_t(s) = \frac{1}{4} \partial_{ss} \xi_t(s) }[/math], and then setting
- [math]\displaystyle{ H_t(z) = \frac{1}{8} \xi_t(\frac{1+iz}{2}). }[/math]
One way to do this is to expand [math]\displaystyle{ \xi_0(s) }[/math] as a linear combination of exponentials [math]\displaystyle{ e^{\alpha s} }[/math], and replace each such exponential by [math]\displaystyle{ \exp( \frac{t}{4} \alpha^2 ) e^{\alpha s} }[/math] to obtain [math]\displaystyle{ \xi_t }[/math]. Roughly speaking, this can be justified as long as everything is absolutely convergent.
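A one-mode sketch of this recipe (with hypothetical sample values of [math]\displaystyle{ t, s, \alpha }[/math]): the evolved mode [math]\displaystyle{ \exp( \frac{t}{4} \alpha^2 ) e^{\alpha s} }[/math] should satisfy the forward heat equation, which finite differences confirm pointwise:

```python
import cmath

def mode(t, s, alpha):
    # heat-flow evolution of a single exponential: e^{alpha s} -> e^{t alpha^2/4} e^{alpha s}
    return cmath.exp(t * alpha ** 2 / 4 + alpha * s)

# check d/dt u = (1/4) d^2/ds^2 u by central finite differences
t, s, alpha = 0.3, 0.7 + 2.1j, 1.3 - 0.4j
ht, hs = 1e-5, 1e-4
dt = (mode(t + ht, s, alpha) - mode(t - ht, s, alpha)) / (2 * ht)
dss = (mode(t, s + hs, alpha) - 2 * mode(t, s, alpha) + mode(t, s - hs, alpha)) / hs ** 2
assert abs(dt - dss / 4) < 1e-5 * abs(dt)
```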
In view of (2.1), we will have
- [math]\displaystyle{ \xi_t(s) = F_{t,N}(s) + \overline{F_{t,M}(\overline{1-s})} + R_{t,N,M}(s) \quad(5.1) }[/math]
where [math]\displaystyle{ F_{t,N}, R_{t,N,M} }[/math] are the heat flow evolutions of [math]\displaystyle{ F_{0,N}, R_{0,N,M} }[/math] respectively.
It is easy to evolve [math]\displaystyle{ F_{t,N}(s) }[/math]. Firstly, from (1.6) one has
- [math]\displaystyle{ F_{0,N}(s) = \sum_{n=1}^N 2 \frac{\Gamma(\frac{s+4}{2})}{(\pi n^2)^{s/2}} - 3 \frac{\Gamma(\frac{s+2}{2})}{(\pi n^2)^{s/2}} }[/math]
and hence by (1.1')
- [math]\displaystyle{ F_{0,N}(s) = \sum_{n=1}^N 2 \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2))\ du - 3 \int_C \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log(\pi n^2) )\ du. }[/math]
We can now evolve to obtain
- [math]\displaystyle{ F_{t,N}(s) = \sum_{n=1}^N 2 \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2 )\ du - 3 \int_C \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2 )\ du (5.2). }[/math]
By integrating on [math]\displaystyle{ C }[/math] rather than the real axis, the integrals remain absolutely convergent here.
Evolving [math]\displaystyle{ R_{0,N,M} }[/math] is a bit trickier. From (1.5) one has
- [math]\displaystyle{ R_{0,N,M}(s) = \frac{s(s-1)}{2} \pi^{-s/2} \frac{e^{-i\pi s} \Gamma(\frac{1-s}{2})}{2^{s+1}\pi^{1/2} i \sin(\pi s/2)} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw }[/math]
which can be rewritten using (1.6) as
- [math]\displaystyle{ 2 \pi^{-s/2} \frac{e^{-i\pi s} \Gamma(\frac{5-s}{2})}{2^{s+1}\pi^{1/2} i \sin(\pi s/2)} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw }[/math]
- [math]\displaystyle{ -3 \pi^{-s/2} \frac{e^{-i\pi s} \Gamma(\frac{3-s}{2})}{2^{s+1}\pi^{1/2} i \sin(\pi s/2)} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. }[/math]
For [math]\displaystyle{ \mathrm{Im}(s) \gt 0 }[/math], we have the geometric series formula
- [math]\displaystyle{ \frac{1}{\sin(\pi s/2)} = -2i e^{i\pi s/2} \sum_{n=0}^\infty e^{i \pi s n} }[/math]
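This geometric series expansion converges very quickly once [math]\displaystyle{ \mathrm{Im}(s) }[/math] is moderately large, as a quick numerical check illustrates (the sample point is arbitrary):

```python
import cmath, math

# 1/sin(pi s/2) = -2i e^{i pi s/2} sum_{n>=0} e^{i pi s n}, valid for Im(s) > 0
s = 1.3 + 2.0j
total = sum(cmath.exp(1j * math.pi * s * n) for n in range(200))
rhs = -2j * cmath.exp(1j * math.pi * s / 2) * total
lhs = 1 / cmath.sin(math.pi * s / 2)
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```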
and from this and (1.1') we can rewrite [math]\displaystyle{ R_{0,N,M}(s) }[/math] as
- [math]\displaystyle{ -2 \sum_{n=0}^\infty \pi^{-s/2} \frac{e^{-i\pi s/2} e^{i \pi s n}}{2^{s}\pi^{1/2} } \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u)\ dw\ du }[/math]
- [math]\displaystyle{ +3 \sum_{n=0}^\infty \pi^{-s/2} \frac{e^{-i\pi s/2} e^{i \pi s n}}{2^{s}\pi^{1/2}} \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u)\ dw\ du }[/math]
where [math]\displaystyle{ \overline{C} }[/math] is the complex conjugate of [math]\displaystyle{ C }[/math]. Hence we can write [math]\displaystyle{ R_{t,N,M}(s) }[/math] exactly as
- [math]\displaystyle{ -2 \sum_{n=0}^\infty \pi^{-s/2} \frac{e^{-i\pi s/2} e^{i \pi s n}}{2^{s}\pi^{1/2}} \int_{\overline{C}}\int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u + \frac{t}{4} (i \pi (n-1/2) + \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2})^2 )\ dw\ du }[/math]
- [math]\displaystyle{ +3 \sum_{n=0}^\infty \pi^{-s/2} \frac{e^{-i\pi s/2} e^{i \pi s n}}{2^{s}\pi^{1/2}} \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u + \frac{t}{4} (i \pi (n-1/2) + \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2})^2 )\ dw\ du. \quad (5.3) }[/math]
Approximation for [math]\displaystyle{ t\gt 0 }[/math]
The above formulae are clearly unwieldy, so let us make a number of heuristic approximations to simplify them. We start with [math]\displaystyle{ F_{t,N}(s) }[/math], assuming that the imaginary part of [math]\displaystyle{ s }[/math] is large and positive and the real part is bounded. We first drop the second term of (5.2) as being lower order:
- [math]\displaystyle{ F_{t,N}(s) \approx \sum_{n=1}^N 2 \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2 )\ du. }[/math]
Next, we shift [math]\displaystyle{ u }[/math] by [math]\displaystyle{ \log \frac{s+4}{2} }[/math] to obtain
- [math]\displaystyle{ F_{t,N}(s) \approx \sum_{n=1}^N \frac{2 \exp( \frac{s+4}{2} \log \frac{s+4}{2} - \frac{s+4}{2})}{(\pi n^2)^{s/2}} \int_C \exp( \frac{s+4}{2} (1 + u - e^u) + \frac{t}{16} (u + \log \frac{s+4}{2\pi n^2})^2 )\ du. }[/math]
Because the expression [math]\displaystyle{ \exp( \frac{s+4}{2} (1+u-e^u) ) }[/math] decays rapidly away from [math]\displaystyle{ u=0 }[/math], we can heuristically approximate
- [math]\displaystyle{ \frac{t}{16} (u + \log \frac{s+4}{2\pi n^2})^2 \approx \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} }[/math]
and then we undo the shift to obtain
- [math]\displaystyle{ F_{t,N}(s) \approx \sum_{n=1}^N \frac{2}{(\pi n^2)^{s/2}} \int_{-\infty}^\infty \exp( \frac{s+4}{2} u - e^u + \frac{t}{16} \log^2\frac{s+4}{2\pi n^2} )\ du }[/math]
which by (1.1) becomes
- [math]\displaystyle{ F_{t,N}(s) \approx \sum_{n=1}^N \frac{2}{(\pi n^2)^{s/2}} \Gamma(\frac{s+4}{2}) \exp( \frac{t}{16} \log^2\frac{s+4}{2\pi n^2} ).\quad (6.1) }[/math]
Reinstating the lower order term and applying (1.6), we have an alternate form
- [math]\displaystyle{ F_{t,N}(s) \approx \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{\exp( \frac{t}{16} \log^2\frac{s+4}{2\pi n^2})}{n^s}.\quad (6.2) }[/math]
We can perform a similar analysis for [math]\displaystyle{ R_{t,N,M} }[/math]. Again, we drop the second term as being lower order. The [math]\displaystyle{ w }[/math] integrand [math]\displaystyle{ w^{s-1} e^{-Nw} }[/math] attains a maximum at [math]\displaystyle{ w = \frac{s}{N} \approx \sqrt{2\pi \mathrm{Im}(s)} i }[/math] and the [math]\displaystyle{ u }[/math] integrand [math]\displaystyle{ \exp( \frac{5-s}{2} u - e^u ) }[/math] attains a maximum at [math]\displaystyle{ u = \log \frac{5-s}{2} \approx \log \frac{\mathrm{Im}(s)}{2} - i \frac{\pi}{2} }[/math], and hence
- [math]\displaystyle{ \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2} \approx \frac{3 \pi i}{4} }[/math]
and so we may heuristically obtain
- [math]\displaystyle{ -2 \sum_{n=0}^\infty \pi^{-s/2} \frac{e^{-i\pi s/2} e^{i \pi s n}}{2^{s}\pi^{1/2}} \int_{-\infty}^\infty \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u - \frac{t\pi^2}{64} (4n+1)^2 )\ dw\ du. }[/math]
Because [math]\displaystyle{ e^{i \pi sn} }[/math] decays incredibly rapidly in [math]\displaystyle{ n }[/math], the [math]\displaystyle{ n=0 }[/math] term should dominate, thus giving
- [math]\displaystyle{ -2 \pi^{-s/2} \frac{e^{-i\pi s/2}}{2^{s}\pi^{1/2}} \int_{-\infty}^\infty \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u - \frac{t\pi^2}{64} )\ dw\ du. }[/math]
The [math]\displaystyle{ u }[/math] integral can be evaluated by (1.1) to obtain
- [math]\displaystyle{ -2 \pi^{-s/2} \frac{e^{-i\pi s/2} \Gamma(\frac{5-s}{2})}{2^{s}\pi^{1/2}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( - \frac{t\pi^2}{64} )\ dw }[/math]
and so by comparison with (2.3) we have
- [math]\displaystyle{ R_{t,N,M}(s) \approx \exp( - t \pi^2/64) R_{0,N,M}(s). }[/math]
In particular, from (4.1) we have
- [math]\displaystyle{ R_{t,N,M}(s) \approx - \exp( - t \pi^2/64) \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i M)^{s-1} \Psi(\frac{s}{2\pi i M}-N). \quad(6.3) }[/math]
Combining (6.2), (6.3), (5.1) we obtain an approximation to [math]\displaystyle{ \xi_t(s) }[/math] and hence to [math]\displaystyle{ H_t(z) = \frac{1}{8} \xi_t(\frac{1+iz}{2}) }[/math].
To understand these asymptotics better, let us inspect [math]\displaystyle{ H_t(x+iy) }[/math] for [math]\displaystyle{ t\gt 0 }[/math] in the region
- [math]\displaystyle{ x+iy = T + \frac{a+ib}{\log T}; \quad t = \frac{\tau}{\log T} }[/math]
with [math]\displaystyle{ T }[/math] large, [math]\displaystyle{ a,b = O(1) }[/math], and [math]\displaystyle{ \tau \gt \frac{1}{2} }[/math]. If [math]\displaystyle{ s = \frac{1+ix-y}{2} }[/math], then we can approximate
- [math]\displaystyle{ \pi^{-s/2} \approx \pi^{-\frac{1+iT}{4}} }[/math]
- [math]\displaystyle{ \Gamma(\frac{s+4}{2}) \approx \Gamma(\frac{9+iT}{4}) T^{\frac{ia-b}{4 \log T}} = \exp( \frac{ia-b}{4} ) \Gamma(\frac{9+iT}{4}) }[/math]
- [math]\displaystyle{ \frac{1}{n^s} \approx \frac{1}{n^{\frac{1+iT}{2}}} }[/math]
- [math]\displaystyle{ \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} ) \approx \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi} - \frac{t}{4} \log T \log n ) }[/math]
- [math]\displaystyle{ \approx \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \frac{1}{n^{\frac{\tau}{4}}} }[/math]
leading to
- [math]\displaystyle{ F_{t,N}(\frac{1+ix-y}{2}) \approx 2\pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \exp( \frac{ia-b}{4} ) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \sum_n \frac{1}{n^{\frac{1+iT}{2} + \frac{\tau}{4}}} }[/math]
- [math]\displaystyle{ \approx 2\pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \exp( \frac{ia-b}{4} ). }[/math]
Similarly for [math]\displaystyle{ F_{t,M}(\frac{1+ix+y}{2}) }[/math] (replacing [math]\displaystyle{ b }[/math] by [math]\displaystyle{ -b }[/math]). If we make a polar coordinate representation
- [math]\displaystyle{ \frac{1}{2} \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) = r_{T,\tau} e^{i \theta_{T,\tau}} }[/math]
one thus has
- [math]\displaystyle{ H_t(x+iy) \approx \frac{1}{2} ( r_{T,\tau} e^{i \theta_{T,\tau}} \exp( \frac{ia-b}{4} ) + r_{T,\tau} e^{-i \theta_{T,\tau}} \exp(\frac{-ia+b}{4}) ) }[/math]
- [math]\displaystyle{ = r_{T,\tau} \cos( \frac{a+ib}{4} + \theta_{T,\tau} ). }[/math]
Thus locally [math]\displaystyle{ H_t(x+iy) }[/math] behaves like a trigonometric function, with zeroes real and equally spaced with spacing [math]\displaystyle{ 4\pi }[/math] (in [math]\displaystyle{ a }[/math]-coordinates) or [math]\displaystyle{ \frac{4\pi}{\log T} }[/math] (in [math]\displaystyle{ x }[/math] coordinates). Once [math]\displaystyle{ \tau }[/math] becomes large, further increase of [math]\displaystyle{ \tau }[/math] basically only increases [math]\displaystyle{ r_{T,\tau} }[/math] and also shifts [math]\displaystyle{ \theta_{T,\tau} }[/math] at rate [math]\displaystyle{ \pi/16 }[/math], causing the number of zeroes to the left of [math]\displaystyle{ T }[/math] to increase at rate [math]\displaystyle{ 1/4 }[/math] as claimed in [KKL2009].
Explicit upper bounds on integrals
We will need effective upper bounds on various integrals that occur as error terms, with explicit constants. Here is a basic tool to do this:
Lemma 3 Let [math]\displaystyle{ \phi: [a,b] \to {\bf C} }[/math] be a smooth function on a compact interval [math]\displaystyle{ [a,b] }[/math]. Let [math]\displaystyle{ \psi: [a,b] \to {\bf C} }[/math] be a measurable function. Let [math]\displaystyle{ I }[/math] denote the integral [math]\displaystyle{ I := \int_a^b e^{\phi(x)} \psi(x)\ dx }[/math].
1. If [math]\displaystyle{ \mathrm{Re} \phi'(x) \lt 0 }[/math] for all [math]\displaystyle{ a \leq x \leq b }[/math], then
- [math]\displaystyle{ |I| \leq e^{\mathrm{Re} \phi(a)} \sup_{a \leq x \leq b} \frac{|\psi(x)|}{|\mathrm{Re} \phi'(x)|}. }[/math]
2. If [math]\displaystyle{ \mathrm{Re} \phi'(x) \gt 0 }[/math] for all [math]\displaystyle{ a \leq x \leq b }[/math], then
- [math]\displaystyle{ |I| \leq e^{\mathrm{Re} \phi(b)} \sup_{a \leq x \leq b} \frac{|\psi(x)|}{|\mathrm{Re} \phi'(x)|}. }[/math]
3. If there is a point [math]\displaystyle{ x_0 \in (a,b) }[/math] such that [math]\displaystyle{ \mathrm{Re} \phi'(x) }[/math] is negative for [math]\displaystyle{ x \gt x_0 }[/math] and positive for [math]\displaystyle{ x \lt x_0 }[/math] with [math]\displaystyle{ \mathrm{Re} \phi''(x_0) \neq 0 }[/math] (thus [math]\displaystyle{ \mathrm{Re} \phi }[/math] has a non-degenerate maximum at [math]\displaystyle{ x_0 }[/math]), then
- [math]\displaystyle{ |I| \leq 2\sqrt{\pi} e^{\mathrm{Re} \phi(x_0)} \sup_{a \leq x \leq b: x \neq x_0} \frac{|\psi(x)| \sqrt{\mathrm{Re} \phi(x_0) - \mathrm{Re} \phi(x)}}{|\mathrm{Re} \phi'(x)|}. }[/math]
4. With the same hypotheses as part 3, we also have
- [math]\displaystyle{ |I| \leq \sqrt{\pi} e^{\mathrm{Re} \phi(x_0)} \sup_{a \leq x \leq b: x \neq x_0} \frac{|\psi(x)|}{|\mathrm{Re} \phi'(x)| \sqrt{\mathrm{Re} \phi(x_0) - \mathrm{Re} \phi(x)}}. }[/math]
Proof Write [math]\displaystyle{ \Phi := \mathrm{Re} \phi }[/math]. To prove part 1, we may normalise [math]\displaystyle{ \Phi(a)=0 }[/math] and the supremum to be [math]\displaystyle{ 1 }[/math], then [math]\displaystyle{ \Phi }[/math] is decreasing with [math]\displaystyle{ \Phi(b) \lt \Phi(a)=0 }[/math]. By the triangle inequality and change of variables we then have
- [math]\displaystyle{ |I| \leq -\int_a^b e^{\Phi(x)} \Phi'(x)\ dx = \int_0^{-\Phi(b)} e^{-y}\ dy \leq 1 }[/math]
as desired. Part 2 is proven similarly.
To prove Part 3, we may normalise [math]\displaystyle{ \Phi(x_0) = x_0 = 0 }[/math] and the supremum to be 1, then [math]\displaystyle{ \Phi }[/math] is negative on the rest of [math]\displaystyle{ [a,b] }[/math] and by Taylor expansion we may write [math]\displaystyle{ \Phi(x) = - f(x)^2 }[/math] for some smooth [math]\displaystyle{ f: [a,b] \to {\bf R} }[/math] with [math]\displaystyle{ f(0)=0 }[/math] and [math]\displaystyle{ f'(x) \gt 0 }[/math] for all [math]\displaystyle{ x \in [a,b] }[/math]. For any [math]\displaystyle{ x \in [a,b] \backslash \{x_0\} }[/math], we have
- [math]\displaystyle{ |\psi(x)| \leq \frac{|\Phi'(x)|}{\sqrt{-\Phi(x)}} = \frac{2 |f(x)| f'(x)}{|f(x)|} = 2 f'(x) }[/math]
and hence by the triangle inequality and change of variables
- [math]\displaystyle{ |I| \leq 2 \int_a^b e^{-f(x)^2} f'(x)\ dx = 2 \int_{f(a)}^{f(b)} e^{-y^2}\ dy \leq 2 \sqrt{\pi} }[/math]
as desired.
Part 4 is proven similarly to Part 3, except that the upper bound of [math]\displaystyle{ |\psi| }[/math] is now [math]\displaystyle{ 2 f(x)^2 f'(x) }[/math], and one uses the identity [math]\displaystyle{ \int_{-\infty}^{\infty} e^{-y^2} y^2\ dy = \frac{1}{2} \sqrt{\pi} }[/math]. [math]\displaystyle{ \Box }[/math]
Note that one can use monotone convergence to send [math]\displaystyle{ b }[/math] to infinity in part 1, and similarly send [math]\displaystyle{ a }[/math] to negative infinity in part 2. In parts 3 and 4 one can send either [math]\displaystyle{ a }[/math] or [math]\displaystyle{ b }[/math] or both to infinity. The bounds can be tight, as can be seen by setting [math]\displaystyle{ \psi(x)=1 }[/math] (for parts 1,2,3) or [math]\displaystyle{ \psi(x) = x^2 }[/math] (for part 4) and [math]\displaystyle{ \phi(x) }[/math] equal to [math]\displaystyle{ -x }[/math] (for part 1), [math]\displaystyle{ x }[/math] (for part 2), or [math]\displaystyle{ -x^2 }[/math] (for parts 3,4), and sending as many endpoints of integration to infinity as possible.
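The bounds can be illustrated numerically on concrete model cases (a sketch using only the standard library; the examples and step counts are ad hoc):

```python
import math

def midpoint(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Part 1 with phi(x) = -2x (so Re phi' = -2 < 0), psi(x) = cos x on [0, 5]:
# the bound is e^{Phi(a)} * sup |psi/Phi'| = 1 * (1/2)
I1 = midpoint(lambda x: math.exp(-2 * x) * math.cos(x), 0.0, 5.0)
assert abs(I1) <= 0.5

# Part 3 with phi(x) = -x^2, psi = 1 on [-3, 3]: here
# sup |psi| sqrt(-Phi)/|Phi'| = sqrt(x^2)/(2|x|) = 1/2, so the bound is sqrt(pi)
I3 = midpoint(lambda x: math.exp(-x * x), -3.0, 3.0)
assert I3 <= 2 * math.sqrt(math.pi) * 0.5
assert I3 > 0.999 * math.sqrt(math.pi)   # near-tight, as noted above
```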
Estimating [math]\displaystyle{ F_{t,N} }[/math]
From (5.2) we have
- [math]\displaystyle{ F_{t,N}(s) = \sum_{n=1}^N \frac{2}{(\pi n^2)^{s/2}} \exp( \frac{s+4}{2} \log \frac{s+4}{2} - \frac{s+4}{2} + \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} ) I_t( \frac{s+4}{2}, \log \frac{s+4}{2\pi n^2} ) - \frac{3}{(\pi n^2)^{s/2}} \exp( \frac{s+2}{2} \log \frac{s+2}{2} - \frac{s+2}{2} + \frac{t}{16} \log^2 \frac{s+2}{2\pi n^2} ) I_t( \frac{s+2}{2}, \log \frac{s+2}{2\pi n^2} ) }[/math]
where
- [math]\displaystyle{ I_t(s, b) := \int_C \exp( s (1+u-e^u) + \frac{t}{16} ((u+b)^2 -b^2) )\ du. }[/math]
Heuristically we have
- [math]\displaystyle{ I_t(s, b) \approx \exp( - s \log s + s ) \Gamma(s) \approx \sqrt{2\pi/s}. }[/math]
In this section we give effective versions of this heuristic. We shift the contour [math]\displaystyle{ C }[/math] to [math]\displaystyle{ C_1 + C_2 }[/math], where [math]\displaystyle{ C_1 }[/math] traverses [math]\displaystyle{ x \mapsto x - ix }[/math] for [math]\displaystyle{ -\infty \lt x \lt \epsilon }[/math] and [math]\displaystyle{ C_2 }[/math] traverses [math]\displaystyle{ x \mapsto x - i \epsilon }[/math] for [math]\displaystyle{ \epsilon \lt x \lt \infty }[/math], and [math]\displaystyle{ \epsilon \gt 0 }[/math] is a parameter at our disposal. We first consider the [math]\displaystyle{ C_2 }[/math] contribution, which is
- [math]\displaystyle{ \int_\epsilon^\infty \exp( \phi(x) )\ dx }[/math]
where
- [math]\displaystyle{ \phi(x) := s (1 + x - i\epsilon - e^x e^{-i\epsilon} ) + \frac{t}{16} ( (x + b - i\epsilon)^2 - b^2). }[/math]
Writing [math]\displaystyle{ s = \sigma+iT }[/math] (we assume [math]\displaystyle{ T \gt \sigma \gt 0 }[/math]) and [math]\displaystyle{ b = \alpha-i\beta }[/math], the real part [math]\displaystyle{ \Phi }[/math] of this phase is
- [math]\displaystyle{ \Phi(x) := \sigma (1 + x ) + \epsilon T - e^x (\sigma \cos \epsilon + T \sin \epsilon ) + \frac{t}{16} ((x + \alpha)^2 - (\epsilon+\beta)^2 - \alpha^2 + \beta^2). }[/math]
In particular
- [math]\displaystyle{ \Phi'(x) := \sigma - e^x (\sigma \cos \epsilon + T \sin \epsilon ) + \frac{t}{8} (x + \alpha). }[/math]
Assuming
- [math]\displaystyle{ \epsilon \leq 1.292; \quad T \sin \epsilon \gt \max(\alpha,1) t/8,\quad (7.1) }[/math]
we have [math]\displaystyle{ e^x \cos \varepsilon \geq e^\varepsilon \cos \varepsilon \geq 1 }[/math] and [math]\displaystyle{ x+\alpha \leq \max(\alpha,1) e^x }[/math], so
- [math]\displaystyle{ \Phi'(x) \leq - e^x (T \sin \epsilon - \max(\alpha,1) t/8) \leq -e^\epsilon (T \sin \epsilon - \max(\alpha,1) t/8) }[/math]
and hence by Lemma 3.1
- [math]\displaystyle{ |\int_\epsilon^\infty \exp( \phi(x) )\ dx| \leq \frac{1}{e^\epsilon (T \sin \epsilon - \max(\alpha,1) t/8)} \exp(\Phi(\epsilon)). \quad (7.2) }[/math]
One has
- [math]\displaystyle{ \Phi(\epsilon) = \sigma(1+\epsilon - e^\epsilon \cos \epsilon) + T (\epsilon - e^\epsilon \sin \epsilon) + \epsilon (\alpha-\beta) \frac{t}{8}. \quad (7.3) }[/math]
Asymptotically, one roughly has [math]\displaystyle{ \Phi(\epsilon) \approx - \epsilon^2 T / 2 }[/math], so this estimate should become good once [math]\displaystyle{ \epsilon }[/math] is significantly larger than [math]\displaystyle{ T^{-1/2} }[/math].
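As a concrete instance of this bound, one can evaluate the [math]\displaystyle{ C_2 }[/math] integral directly for sample parameters and compare with (7.2) and (7.3) (a sketch; the parameter values are hypothetical, with [math]\displaystyle{ b = \alpha }[/math] taken real so that [math]\displaystyle{ \beta = 0 }[/math]):

```python
import cmath, math

# hypothetical sample parameters satisfying (7.1); b = alpha real, so beta = 0
sigma, T, t = 0.6, 30.0, 0.2
eps, alpha = 0.5, 2.0
s = complex(sigma, T)

def phi(x):
    u = x - 1j * eps
    return s * (1 + u - cmath.exp(u)) + (t / 16) * ((x + alpha - 1j * eps) ** 2 - alpha ** 2)

# (7.3) agrees with the real part of the phase at x = eps
Phi_eps = (sigma * (1 + eps - math.exp(eps) * math.cos(eps))
           + T * (eps - math.exp(eps) * math.sin(eps))
           + eps * alpha * t / 8)
assert abs(phi(eps).real - Phi_eps) < 1e-9

# midpoint rule; the integrand decays double-exponentially, so [eps, 6] suffices
n, lo, hi = 200000, eps, 6.0
h = (hi - lo) / n
I = sum(cmath.exp(phi(lo + (k + 0.5) * h)) for k in range(n)) * h

# the bound (7.2)
bound = math.exp(Phi_eps) / (math.exp(eps) * (T * math.sin(eps) - max(alpha, 1.0) * t / 8))
assert abs(I) <= bound
```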
Now we consider the [math]\displaystyle{ C_1 }[/math] integral, which is
- [math]\displaystyle{ (1-i) \int_{-\infty}^\epsilon \exp( \tilde \phi(x) )\ dx }[/math]
where
- [math]\displaystyle{ \tilde \phi(x) := s (1 + x - ix - e^x e^{-ix} ) + \frac{t}{16} ( (x + b - ix)^2 - b^2). }[/math]
The real part [math]\displaystyle{ \tilde \Phi }[/math] is
- [math]\displaystyle{ \tilde \Phi(x) := \sigma (1 + x ) + x T - e^x (\sigma \cos x + T \sin x ) + \frac{t}{16} ((x + \alpha)^2 - (x+\beta)^2 - \alpha^2 + \beta^2), }[/math]
and the derivative is
- [math]\displaystyle{ \tilde \Phi'(x) := \sigma (1 - e^x \cos x + e^x \sin x) + T (1 - e^x \sin x - e^x \cos x) + \frac{t}{8} (\alpha - \beta). }[/math]
We first control the contribution of the integral from [math]\displaystyle{ -\infty }[/math] to [math]\displaystyle{ -\epsilon' }[/math] for some parameter [math]\displaystyle{ \epsilon'\gt 0 }[/math] of our choosing. The expression [math]\displaystyle{ 1 - e^x \cos x + e^x \sin x }[/math] is nonnegative for [math]\displaystyle{ x \leq 0 }[/math]. If
- [math]\displaystyle{ \epsilon' \lt 0.771 \quad (7.4) }[/math]
then for all [math]\displaystyle{ x \leq -\epsilon' }[/math] one has [math]\displaystyle{ 1 - e^x \sin x - e^x \cos x \geq 1 + e^{-\epsilon'} \sin \epsilon' - e^{-\epsilon'} \cos \epsilon' }[/math] and hence
- [math]\displaystyle{ \tilde \Phi'(x) \geq X \quad (7.5) }[/math]
where
- [math]\displaystyle{ X :=T (1 + e^{-\epsilon'} \sin \epsilon' - e^{-\epsilon'} \cos \epsilon') + \frac{t}{8} (\alpha - \beta). \quad (7.6) }[/math]
Assuming [math]\displaystyle{ X }[/math] is positive, Lemma 3.1 then gives that this portion of the [math]\displaystyle{ C_1 }[/math] integral has magnitude at most
- [math]\displaystyle{ \frac{\sqrt{2}}{X} \exp( \tilde \Phi(-\epsilon') ). \quad (7.7) }[/math]
Asymptotically one has [math]\displaystyle{ X \approx 2 \epsilon' T }[/math] and [math]\displaystyle{ \tilde \Phi(-\epsilon') \approx -(\epsilon')^2 T }[/math], so as before this contribution becomes small once [math]\displaystyle{ \epsilon' }[/math] is significantly larger than [math]\displaystyle{ T^{-1/2} }[/math].
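The pointwise claims used in this step (nonnegativity of the [math]\displaystyle{ \sigma }[/math]-coefficient for [math]\displaystyle{ x \leq 0 }[/math], the lower bound on the [math]\displaystyle{ T }[/math]-coefficient for [math]\displaystyle{ x \leq -\epsilon' }[/math]), together with the approximation [math]\displaystyle{ X \approx 2\epsilon' T }[/math], can be spot-checked numerically; a sketch with a hypothetical choice of [math]\displaystyle{ \epsilon' }[/math]:

```python
import math

def g1(x):
    # coefficient of sigma in the derivative of Phi~
    return 1 - math.exp(x) * math.cos(x) + math.exp(x) * math.sin(x)

def h1(x):
    # coefficient of T in the derivative of Phi~
    return 1 - math.exp(x) * math.sin(x) - math.exp(x) * math.cos(x)

eps_p = 0.5  # hypothetical value satisfying (7.4)
xs = [-eps_p - 0.001 * k for k in range(20000)]  # grid down to about -20.5
assert all(g1(x) >= 0 for x in xs)            # sigma-term is nonnegative
assert all(h1(x) >= h1(-eps_p) for x in xs)   # T-term minimized at x = -eps'

# for small eps', the T-coefficient in (7.6) is approximately 2*eps'
for ep in (0.01, 0.02):
    assert abs(h1(-ep) / (2 * ep) - 1) < 0.05
```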
Finally we deal with the integral between [math]\displaystyle{ -\epsilon' }[/math] and [math]\displaystyle{ \epsilon }[/math], which we write as
- [math]\displaystyle{ (1-i) \int_{-\epsilon'}^{\epsilon} e^{\phi_1(x)} \psi(x)\ dx }[/math]
where
- [math]\displaystyle{ \phi_1(x) := s (1 + x - ix - e^x e^{-ix} ) }[/math]
and
- [math]\displaystyle{ \psi(x) := \exp( \frac{t}{16} ( (x + b - ix)^2 - b^2) ). }[/math]
The real part [math]\displaystyle{ \Phi_1 }[/math] of [math]\displaystyle{ \phi_1 }[/math] is
- [math]\displaystyle{ \Phi_1(x) := \sigma (1 + x - e^x \cos x) + T (x - e^x \sin x), }[/math]
and the derivative is
- [math]\displaystyle{ \Phi'_1(x) = \sigma (1 - e^x \cos x + e^x \sin x) + T (1 - e^x \sin x - e^x \cos x). }[/math]
For [math]\displaystyle{ x }[/math] between [math]\displaystyle{ -0.771 }[/math] and [math]\displaystyle{ 1.292 }[/math], one can check that
- [math]\displaystyle{ |1 - e^x \cos x + e^x \sin x| \leq |1 - e^x \sin x - e^x \cos x| }[/math]
so
- [math]\displaystyle{ |\Phi'_1(x)| \geq (T-\sigma) |1 - e^x \sin x - e^x \cos x|. }[/math]
Similarly one can check that
- [math]\displaystyle{ |1 + x - e^x \cos x| \leq |x - e^x \sin x| }[/math]
and so
- [math]\displaystyle{ \Phi_1(0) - \Phi_1(x) \leq (T+\sigma) (e^x \sin x - x); }[/math]
since
- [math]\displaystyle{ \frac{\sqrt{e^x \sin x-x}}{|1 - e^x \sin x - e^x \cos x|} \leq 1 }[/math]
(actually we can improve this bound by almost a factor of two if we need to) and we can crudely bound
- [math]\displaystyle{ |\psi(x)| \leq \exp( \frac{t}{8} \max( \epsilon (\alpha-\beta), \epsilon' (\beta-\alpha) ) ) }[/math]
we thus have from Lemma 3.3 that the contribution of this integral is at most
- [math]\displaystyle{ 2 \sqrt{2 \pi} \frac{\sqrt{T+\sigma}}{T-\sigma} \exp( \frac{t}{8} \max( \epsilon (\alpha-\beta), \epsilon' (\beta-\alpha) ) ). \quad (7.8) }[/math]
Thus
- [math]\displaystyle{ |I_t(s,b)| \leq (7.2) + (7.7) + (7.8). }[/math]
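The three elementary inequalities invoked above ("one can check that ...") can be spot-checked on a grid over the stated range; a quick numerical sketch:

```python
import math

def check(x):
    # the three inequalities used above, for -0.771 <= x <= 1.292
    ex = math.exp(x)
    a = abs(1 - ex * math.cos(x) + ex * math.sin(x))
    b = abs(1 - ex * math.sin(x) - ex * math.cos(x))
    c = abs(1 + x - ex * math.cos(x))
    d = abs(x - ex * math.sin(x))
    ok = (a <= b + 1e-9) and (c <= d + 1e-9)
    if abs(x) > 1e-6:  # both sides of the third inequality vanish at x = 0
        ok = ok and math.sqrt(ex * math.sin(x) - x) <= b + 1e-9
    return ok

n = 4000
assert all(check(-0.771 + (1.292 + 0.771) * k / n) for k in range(n + 1))
```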
This bound should be fine for the second term in the expansion of [math]\displaystyle{ F_{t,N}(s) }[/math]. For the first term we need to extract the main term for [math]\displaystyle{ I_t(s,b) }[/math], not just get an upper bound. For this we observe that
- [math]\displaystyle{ \int_C \exp( s (1+u-e^u) )\ du = \exp( s - s \log s ) \Gamma(s) }[/math]
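Indeed, writing [math]\displaystyle{ u = v - \log s }[/math] turns the exponent into the one appearing in (1.1'):
- [math]\displaystyle{ s(1+u-e^u) = (s - s \log s) + (s v - e^v), }[/math]
so after shifting the contour back to [math]\displaystyle{ C }[/math] (permissible by the rapid decay of the integrand), the [math]\displaystyle{ v }[/math]-integral is [math]\displaystyle{ \Gamma(s) }[/math].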
and hence
- [math]\displaystyle{ I_t(s,b) = \exp( s - s \log s ) \Gamma(s) + \tilde I_t(s,b) }[/math]
where
- [math]\displaystyle{ \tilde I_t(s,b) := \int_C \exp( s (1+u-e^u) ) (\exp( \frac{t}{16} ((u+b)^2 -b^2) ) - 1)\ du. }[/math]
We can integrate by parts to obtain
- [math]\displaystyle{ \tilde I_t(s,b) = \frac{1}{s} \int_C \exp( s (1+u-e^u) ) \tilde \psi(u)\ du }[/math]
where
- [math]\displaystyle{ \tilde \psi(u) := \frac{d}{du} \frac{\exp( \frac{t}{16} ((u+b)^2 -b^2) ) - 1}{e^u - 1}. }[/math]
By the quotient rule, we have
- [math]\displaystyle{ \tilde \psi(u) = \frac{t}{8} (u+b) \frac{\exp( \frac{t}{16} ((u+b)^2 -b^2) )}{e^u - 1} - \frac{e^u (\exp( \frac{t}{16} ((u+b)^2 -b^2) )-1)}{(e^u-1)^2}. \quad (7.9) }[/math]
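As a check on (7.9), one can compare it against a central finite difference of the quotient [math]\displaystyle{ (\exp( \frac{t}{16} ((u+b)^2 -b^2) )-1)/(e^u-1) }[/math] at a point where [math]\displaystyle{ e^u-1 }[/math] is away from zero; a sketch with hypothetical values of [math]\displaystyle{ t }[/math] and [math]\displaystyle{ b }[/math]:

```python
import cmath

t, b = 0.4, 1.0 + 0.5j  # hypothetical sample values

def g(u):
    # the numerator exp(t/16*((u+b)^2 - b^2)) - 1
    return cmath.exp(t / 16 * ((u + b)**2 - b**2)) - 1

def psi_tilde(u):
    # right-hand side of (7.9)
    E = g(u) + 1
    return ((t / 8) * (u + b) * E / (cmath.exp(u) - 1)
            - cmath.exp(u) * g(u) / (cmath.exp(u) - 1)**2)

# compare with a central finite difference of g(u)/(e^u - 1)
u, h = 1.0 + 0.3j, 1e-6
fd = (g(u + h) / (cmath.exp(u + h) - 1)
      - g(u - h) / (cmath.exp(u - h) - 1)) / (2 * h)
assert abs(fd - psi_tilde(u)) < 1e-4
```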
The expression (7.9) is tractable as long as [math]\displaystyle{ e^u-1 }[/math] stays away from zero; when it does not, we can proceed as follows. First observe from the fundamental theorem of calculus that
- [math]\displaystyle{ \exp( \frac{t}{16} ((u+b)^2 -b^2) ) - 1 = u \int_0^1 \frac{t}{8} (\theta u+b) \exp(\frac{t}{16} ((\theta u+b)^2 -b^2) )\ d\theta }[/math]
and
- [math]\displaystyle{ \frac{u}{e^u-1} = \int_0^1 \frac{1}{1 + \sigma(e^u-1)}\ d\sigma }[/math]
and hence
- [math]\displaystyle{ \tilde \psi(u) = \frac{t}{8} \int_0^1 \int_0^1 \frac{d}{du} [ (\theta u+b) \frac{\exp(\frac{t}{16} ((\theta u+b)^2 -b^2) )}{1+\sigma(e^u-1)} ]\ d\sigma d\theta, }[/math]
and carrying out the differentiation,
- [math]\displaystyle{ \tilde \psi(u) = \frac{t}{8} \int_0^1 \int_0^1 \frac{\exp(\frac{t}{16} ((\theta u+b)^2 -b^2) )}{1+\sigma(e^u-1)} [ \theta + \frac{t}{8} \theta (\theta u + b)^2 - (\theta u+b) \frac{\sigma e^u}{1+\sigma(e^u-1)} ]\ d\sigma d\theta. }[/math]
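One can check this double-integral representation numerically against the quotient-rule form (7.9), e.g. by a midpoint rule; note the overall factor [math]\displaystyle{ \frac{t}{8} }[/math] coming from the [math]\displaystyle{ \theta }[/math]-derivative of the Gaussian factor. A sketch with hypothetical values of [math]\displaystyle{ t }[/math] and [math]\displaystyle{ b }[/math]:

```python
import cmath

t, b = 0.4, 1.0 + 0.5j  # hypothetical sample values

def psi_tilde_direct(u):
    # quotient-rule form (7.9)
    E = cmath.exp(t / 16 * ((u + b)**2 - b**2))
    return ((t / 8) * (u + b) * E / (cmath.exp(u) - 1)
            - cmath.exp(u) * (E - 1) / (cmath.exp(u) - 1)**2)

def psi_tilde_double(u, n=400):
    # midpoint-rule evaluation of the double-integral representation
    eu = cmath.exp(u)
    total = 0.0
    for i in range(n):
        th = (i + 0.5) / n          # theta variable
        w = th * u + b
        E = cmath.exp(t / 16 * (w**2 - b**2))
        for j in range(n):
            sg = (j + 0.5) / n      # inner integration variable
            D = 1 + sg * (eu - 1)
            total += (E / D) * (th + (t / 8) * th * w**2 - w * sg * eu / D)
    return (t / 8) * total / n**2

u = 0.2 + 0.1j  # a point near (but not at) u = 0
assert abs(psi_tilde_double(u) - psi_tilde_direct(u)) < 1e-3
```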