Asymptotics of H_t

Asymptotics for [math]\displaystyle{ t=0 }[/math]

The approximate functional equation (see e.g. [T1986, (4.12.4)]) asserts that

[math]\displaystyle{ \displaystyle \zeta(s) = \sum_{n \leq N} \frac{1}{n^s} + \pi^{s-1/2} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{n \leq N} \frac{1}{n^{1-s}} + O( t^{-\sigma/2} ) }[/math]

for [math]\displaystyle{ s = \sigma +it }[/math] with [math]\displaystyle{ t }[/math] large, [math]\displaystyle{ 0 \lt \sigma \lt 1 }[/math], and [math]\displaystyle{ N := \sqrt{t/2\pi} }[/math]. This implies that

[math]\displaystyle{ \displaystyle \xi(s) = F(s) + F(1-s) + O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} ) }[/math]

where

[math]\displaystyle{ \displaystyle F(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}. }[/math]
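
As a quick numerical sanity check of the approximate functional equation above, one can compare [math]\displaystyle{ \zeta(s) }[/math] with the two truncated sums using mpmath. This is only a sketch: the sample point, precision and cutoff are arbitrary choices, and the agreement is only up to the stated [math]\displaystyle{ O(t^{-\sigma/2}) }[/math] error.

<pre>
# Numerical sanity check of the approximate functional equation
#   zeta(s) ~ sum_{n<=N} n^{-s} + pi^{s-1/2} Gamma((1-s)/2)/Gamma(s/2) sum_{n<=N} n^{s-1},
# with N = sqrt(t/2 pi).  The sample point s is an arbitrary choice.
from mpmath import mp

mp.dps = 30
sigma, t = 0.6, 5000.0
s = mp.mpc(sigma, t)
N = int(mp.floor(mp.sqrt(t / (2 * mp.pi))))

main = sum(mp.power(n, -s) for n in range(1, N + 1))
dual = sum(mp.power(n, s - 1) for n in range(1, N + 1))
chi = mp.pi**(s - mp.mpf(1)/2) * mp.gamma((1 - s)/2) / mp.gamma(s/2)

approx = main + chi * dual
exact = mp.zeta(s)
print("zeta(s)   =", exact)
print("approx    =", approx)
print("abs error =", abs(exact - approx), "(vs t^(-sigma/2) =", mp.mpf(t)**(-sigma/2), ")")
</pre>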

Writing

[math]\displaystyle{ \displaystyle \frac{s(s-1)}{2} \Gamma(s/2) = 2 \Gamma(\frac{s+4}{2}) - 3 \Gamma(\frac{s+2}{2}) }[/math]

we have [math]\displaystyle{ F(s) = 2 F_0(s) - 3 F_{-1}(s) }[/math], where

[math]\displaystyle{ \displaystyle F_j(s) := \pi^{-s/2} \Gamma(\frac{s+4}{2} + j) \sum_{n=1}^N \frac{1}{n^s}. }[/math]

The [math]\displaystyle{ F_{-1} }[/math] term sums to [math]\displaystyle{ O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} ) }[/math], hence

[math]\displaystyle{ \displaystyle \xi(s) = 2F_0(s) + 2F_0(1-s) + O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} ) }[/math]

and thus

[math]\displaystyle{ \displaystyle H(x+iy) = \frac{1}{4} F_0( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_0( \frac{1+ix+y}{2} )} + O( \Gamma(\frac{9+ix+y}{4}) x^{-(1+y)/2} ). }[/math]

One would expect the [math]\displaystyle{ \sum_{n=1}^N \frac{1}{n^s} }[/math] term to remain more or less bounded (this is basically the Lindelöf hypothesis), leading to the heuristic

[math]\displaystyle{ \displaystyle |F_0(\frac{1+ix \pm y}{2})| \asymp |\Gamma(\frac{9+ix \pm y}{4})|. }[/math]

Since [math]\displaystyle{ \Gamma(\frac{9+ix - y}{4}) \approx \Gamma(\frac{9+ix+y}{4}) (ix/4)^{-y/2} }[/math], we expect the [math]\displaystyle{ F_0( \frac{1+ix+y}{2} ) }[/math] term to dominate once [math]\displaystyle{ y \gg \frac{1}{\log x} }[/math].
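
This heuristic can be illustrated numerically. With the usual normalisation [math]\displaystyle{ H(x+iy) = \frac{1}{8} \xi(\frac{1+ix-y}{2}) }[/math] at [math]\displaystyle{ t=0 }[/math], the following mpmath sketch compares [math]\displaystyle{ H(x+iy) }[/math] with [math]\displaystyle{ \frac{1}{4} F_0( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_0( \frac{1+ix+y}{2} )} }[/math]; the test point is an arbitrary choice and only rough (power-saving) agreement is expected.

<pre>
# Compare H(x+iy) = (1/8) xi((1+ix-y)/2) with the approximation
#   (1/4) F_0((1+ix-y)/2) + (1/4) conj(F_0((1+ix+y)/2)),
# where F_0(s) = pi^{-s/2} Gamma((s+4)/2) sum_{n<=N} n^{-s} and N = sqrt(Im(s)/2 pi).
from mpmath import mp

mp.dps = 40

def xi(s):
    return s*(s - 1)/2 * mp.pi**(-s/2) * mp.gamma(s/2) * mp.zeta(s)

def F0(s):
    N = int(mp.floor(mp.sqrt(mp.im(s)/(2*mp.pi))))
    return mp.pi**(-s/2) * mp.gamma((s + 4)/2) * sum(mp.power(n, -s) for n in range(1, N + 1))

x, y = mp.mpf(10000), mp.mpf(0.4)
s1 = (1 + mp.mpc(0, 1)*x - y)/2
s2 = (1 + mp.mpc(0, 1)*x + y)/2

H_exact = xi(s1)/8
H_approx = F0(s1)/4 + mp.conj(F0(s2))/4
print("H      =", H_exact)
print("approx =", H_approx)
print("relative error =", abs(H_exact - H_approx)/abs(H_exact))
</pre>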

Asymptotics for [math]\displaystyle{ t \gt 0 }[/math]

Let [math]\displaystyle{ z=x+iy }[/math] for large [math]\displaystyle{ x }[/math] and positive bounded [math]\displaystyle{ y }[/math]. We have

[math]\displaystyle{ \displaystyle H_t(z) = \frac{1}{2} \int_{-\infty}^\infty e^{tu^2} \Phi(u) \exp(izu)\ du }[/math]

where

[math]\displaystyle{ \displaystyle \Phi(u) = \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}). }[/math]
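
For reference, [math]\displaystyle{ H_t }[/math] can also be evaluated directly from this integral representation by numerical quadrature, folding the integral onto [math]\displaystyle{ [0,\infty) }[/math] using the evenness of [math]\displaystyle{ \Phi }[/math]. The sketch below is only illustrative: the series truncation, the quadrature cutoff and the test point are ad hoc choices.

<pre>
# Direct numerical evaluation of H_t(z) = (1/2) int e^{t u^2} Phi(u) e^{izu} du.
# Since Phi is even, this equals int_0^infty e^{t u^2} Phi(u) cos(zu) du.
from mpmath import mp

mp.dps = 30

def Phi(u, nterms=30):
    return sum((2*mp.pi**2*n**4*mp.exp(9*u) - 3*mp.pi*n**2*mp.exp(5*u))
               * mp.exp(-mp.pi*n**2*mp.exp(4*u)) for n in range(1, nterms + 1))

def H(t, z):
    # Phi(u) decays like exp(-pi e^{4u}), so truncating the integral at u = 2 is harmless.
    return mp.quad(lambda u: mp.exp(t*u**2)*Phi(u)*mp.cos(z*u), [0, 2], maxdegree=10)

print("H_t(z) =", H(mp.mpf(0.2), mp.mpc(20, 0.3)))
</pre>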

We can shift contours to

[math]\displaystyle{ \displaystyle H_t(z) = \frac{1}{2} \int_{i\theta-\infty}^{i\theta+\infty} e^{tu^2} \Phi(u) \exp(izu)\ du }[/math]

to any [math]\displaystyle{ -\pi/8 \lt \theta \lt \pi/8 }[/math] that we please; it seems that a good choice will be [math]\displaystyle{ \theta = \frac{1}{4} \mathrm{arg} (ix+y+9) \approx \frac{\pi}{8} - \frac{y+9}{4x} }[/math]. By symmetry, we thus have

[math]\displaystyle{ \displaystyle H_t(z) = G_t(x+iy) + \overline{G_t(x-iy)} }[/math]

where

[math]\displaystyle{ \displaystyle G_t(z) := \frac{1}{2} \int_{i\theta}^{i\theta+\infty} e^{tu^2} \Phi(u) \exp(izu)\ du. }[/math]

By Fubini's theorem we have

[math]\displaystyle{ \displaystyle G_{t}(x \pm i y) = \sum_{n=1}^\infty \pi^2 n^4 \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \pi n^2 e^{4u} + (ix \mp y + 9) u)\ du }[/math]
[math]\displaystyle{ \displaystyle - \sum_{n=1}^\infty \frac{3}{2} \pi n^2 \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \pi n^2 e^{4u} + (ix \mp y + 5) u)\ du. }[/math]

The second terms end up being smaller than the first terms by a factor of about [math]\displaystyle{ O(1/x) }[/math], and we will ignore them for now. Making the change of variables [math]\displaystyle{ u = \frac{1}{4} \log \frac{ix \pm y + 9}{4\pi n^2} + v }[/math], we basically have

[math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \sum_{n=1}^\infty \pi^2 n^4 (\frac{ix \pm y+9}{4\pi n^2})^{\frac{ix \mp y+9}{4}} \int_{-\frac{1}{4} \log \frac{|ix\pm y+9|}{4\pi n^2}}^\infty \exp( \frac{t}{16} (\log \frac{ix \pm y+9}{4\pi n^2} + v)^2 + (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv. }[/math]

The function [math]\displaystyle{ \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) ) }[/math] decays rapidly away from [math]\displaystyle{ v=0 }[/math]. This suggests firstly that this integral is going to be very small when [math]\displaystyle{ n \gg N := \sqrt{x/4\pi} }[/math] (since the left limit of integration will then be to the right of the origin), so we will assume heuristically that [math]\displaystyle{ n }[/math] is now restricted to the range [math]\displaystyle{ n \leq N }[/math]. Next, we approximate [math]\displaystyle{ \exp( \frac{t}{16} (\log \frac{ix \pm y+9}{4\pi n^2} + v)^2) }[/math] by [math]\displaystyle{ \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) }[/math], and then send the left limit off to infinity to obtain (heuristically)

[math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \sum_{n \leq N} \pi^2 n^4 (\frac{ix \pm y+9}{4\pi n^2})^{\frac{ix \mp y+9}{4}} \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) \int_{-\infty}^\infty \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv. }[/math]

Making the change of variables [math]\displaystyle{ w := \frac{ix \mp y + 9}{4} e^{4v} }[/math] we see that

[math]\displaystyle{ \int_{-\infty}^\infty \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv = \frac{1}{4} \Gamma(\frac{ix \mp y + 9}{4}) (\frac{4}{ix \mp y + 9})^{\frac{ix \mp y+9}{4}} }[/math]

and thus

[math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \Gamma(\frac{ix \mp y + 9}{4}) \sum_{n \leq N} \frac{\pi^2}{4} n^4 (\frac{1}{\pi n^2})^{\frac{ix \mp y+9}{4}} \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) }[/math]

which simplifies a bit to

[math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \frac{1}{4} \pi^{-\frac{ix \mp y + 1}{4}} \Gamma(\frac{ix \mp y + 9}{4}) \sum_{n \leq N} \frac{\exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} )}{n^{\frac{1 \mp y + ix}{2}}} }[/math]

and thus we heuristically have

[math]\displaystyle{ H_t(x+iy) \approx \frac{1}{4} F_t( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_t( \frac{1+ix+y}{2} )} }[/math]

where

[math]\displaystyle{ F_t( s ) := \pi^{-s/2} \Gamma(\frac{s+4}{2}) \sum_{n \leq N} \frac{\exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} )}{n^{s}}. }[/math]

Here we can view [math]\displaystyle{ N }[/math] as a function of [math]\displaystyle{ s }[/math] by the formula [math]\displaystyle{ N = \sqrt{\mathrm{Im}(s)/2\pi} }[/math].
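
As a rough numerical check of this heuristic, one can compare the right-hand side with a direct quadrature of the defining integral (the quadrature is repeated from the earlier sketch so that this snippet is self-contained); the parameters below are arbitrary choices, and agreement to a few digits is all that is expected.

<pre>
# Rough check of H_t(x+iy) ~ (1/4) F_t((1+ix-y)/2) + (1/4) conj(F_t((1+ix+y)/2))
# against direct quadrature of H_t(z) = int_0^infty e^{t u^2} Phi(u) cos(zu) du.
from mpmath import mp

mp.dps = 50

def Phi(u, nterms=30):
    return sum((2*mp.pi**2*n**4*mp.exp(9*u) - 3*mp.pi*n**2*mp.exp(5*u))
               * mp.exp(-mp.pi*n**2*mp.exp(4*u)) for n in range(1, nterms + 1))

def H_direct(t, z):
    return mp.quad(lambda u: mp.exp(t*u**2)*Phi(u)*mp.cos(z*u), [0, 2], maxdegree=12)

def F_t(t, s):
    N = int(mp.floor(mp.sqrt(mp.im(s)/(2*mp.pi))))   # N = sqrt(Im(s)/2 pi)
    return mp.pi**(-s/2)*mp.gamma((s + 4)/2)*sum(
        mp.exp(t/16*mp.log((s + 4)/(2*mp.pi*n**2))**2)*mp.power(n, -s) for n in range(1, N + 1))

t, x, y = mp.mpf(0.2), mp.mpf(200), mp.mpf(0.4)
s1 = (1 + mp.mpc(0, 1)*x - y)/2
s2 = (1 + mp.mpc(0, 1)*x + y)/2

exact = H_direct(t, mp.mpc(x, y))
approx = F_t(t, s1)/4 + mp.conj(F_t(t, s2))/4
print("direct    :", exact)
print("heuristic :", approx)
</pre>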

To understand these asymptotics better, let us inspect [math]\displaystyle{ H_t(x+iy) }[/math] for [math]\displaystyle{ t\gt 0 }[/math] in the region

[math]\displaystyle{ x+iy = T + \frac{a+ib}{\log T}; \quad t = \frac{\tau}{\log T} }[/math]

with [math]\displaystyle{ T }[/math] large, [math]\displaystyle{ a,b = O(1) }[/math], and [math]\displaystyle{ \tau \gt \frac{1}{2} }[/math]. If [math]\displaystyle{ s = \frac{1+ix-y}{2} }[/math], then we can approximate

[math]\displaystyle{ \pi^{-s/2} \approx \pi^{-\frac{1+iT}{4}} }[/math]
[math]\displaystyle{ \Gamma(\frac{s+4}{2}) \approx \Gamma(\frac{9+iT}{4}) T^{\frac{ia-b}{4 \log T}} = \exp( \frac{ia-b}{4} ) \Gamma(\frac{9+iT}{4}) }[/math]
[math]\displaystyle{ \frac{1}{n^s} \approx \frac{1}{n^{\frac{1+iT}{2}}} }[/math]
[math]\displaystyle{ \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} ) \approx \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi} - \frac{t}{4} \log T \log n ) }[/math]
[math]\displaystyle{ \approx \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \frac{1}{n^{\frac{\tau}{4}}} }[/math]

leading to

[math]\displaystyle{ F_t(\frac{1+ix-y}{2}) \approx \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \exp( \frac{ia-b}{4} ) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \sum_n \frac{1}{n^{\frac{1+iT}{2} + \frac{\tau}{4}}} }[/math]
[math]\displaystyle{ \approx \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \exp( \frac{ia-b}{4} ). }[/math]

Similarly for [math]\displaystyle{ F_t(\frac{1+ix+y}{2}) }[/math] (replacing [math]\displaystyle{ b }[/math] by [math]\displaystyle{ -b }[/math]). If we make a polar coordinate representation

[math]\displaystyle{ \frac{1}{2} \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) = r_{T,\tau} e^{i \theta_{T,\tau}} }[/math]

one thus has

[math]\displaystyle{ H_t(x+iy) \approx \frac{1}{2} ( r_{T,\tau} e^{i \theta_{T,\tau}} \exp( \frac{ia-b}{4} ) + r_{T,\tau} e^{-i \theta_{T,\tau}} \exp(\frac{-ia+b}{4}) ) }[/math]
[math]\displaystyle{ = r_{T,\tau} \cos( \frac{a+ib}{4} + \theta_{T,\tau} ). }[/math]

Thus locally [math]\displaystyle{ H_t(x+iy) }[/math] behaves like a trigonometric function, with zeroes real and equally spaced with spacing [math]\displaystyle{ 4\pi }[/math] (in [math]\displaystyle{ a }[/math]-coordinates) or [math]\displaystyle{ \frac{4\pi}{\log T} }[/math] (in [math]\displaystyle{ x }[/math] coordinates). Once [math]\displaystyle{ \tau }[/math] becomes large, further increase of [math]\displaystyle{ \tau }[/math] basically only increases [math]\displaystyle{ r_{T,\tau} }[/math] and also shifts [math]\displaystyle{ \theta_{T,\tau} }[/math] at rate [math]\displaystyle{ \pi/16 }[/math], causing the number of zeroes to the left of [math]\displaystyle{ T }[/math] to increase at rate [math]\displaystyle{ 1/4 }[/math] as claimed in [KKL2009].

Riemann-Siegel formula

Proposition 1 (Riemann-Siegel formula) For any natural numbers [math]\displaystyle{ N,M }[/math] and complex number [math]\displaystyle{ s }[/math] that is not an integer, we have

[math]\displaystyle{ \zeta(s) = \sum_{n=1}^N \frac{1}{n^s} + \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{m=1}^M \frac{1}{m^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw }[/math]

where [math]\displaystyle{ w^{s-1} := \exp((s-1) \log w) }[/math] and we use the branch of the logarithm with imaginary part in [math]\displaystyle{ [0,2\pi) }[/math], and [math]\displaystyle{ C_M }[/math] is any contour from [math]\displaystyle{ +\infty }[/math] to [math]\displaystyle{ +\infty }[/math] going once anticlockwise around the zeroes [math]\displaystyle{ 2\pi i m }[/math] of [math]\displaystyle{ e^w-1 }[/math] with [math]\displaystyle{ |m| \leq M }[/math], but not around any other zeroes.

Proof This equation is in [T1986, p. 82], but we give a proof here. The right-hand side is meromorphic in [math]\displaystyle{ s }[/math], so it will suffice to establish that

  1. The right-hand side is independent of [math]\displaystyle{ N }[/math];
  2. The right-hand side is independent of [math]\displaystyle{ M }[/math];
  3. Whenever [math]\displaystyle{ \mathrm{Re}(s)\gt 1 }[/math] and [math]\displaystyle{ s }[/math] is not an integer, the right-hand side converges to [math]\displaystyle{ \zeta(s) }[/math] if [math]\displaystyle{ M=0 }[/math] and [math]\displaystyle{ N \to \infty }[/math].

We begin with the first claim. It suffices to show that the right-hand sides for [math]\displaystyle{ N }[/math] and [math]\displaystyle{ N-1 }[/math] agree for every [math]\displaystyle{ N \gt 1 }[/math]. Subtracting, it suffices to show that

[math]\displaystyle{ 0 = \frac{1}{N^s} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} (e^{-Nw} - e^{-(N-1)w})}{e^w-1}\ dw. }[/math]

The integrand here simplifies to [math]\displaystyle{ - w^{s-1} e^{-Nw} }[/math], which on shrinking [math]\displaystyle{ C_M }[/math] to wrap around the positive real axis becomes [math]\displaystyle{ N^{-s} \Gamma(s) (1 - e^{2\pi i(s-1)}) }[/math]. The claim then follows from the Euler reflection formula [math]\displaystyle{ \Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)} }[/math].

Now we verify the second claim. It suffices to show that the right-hand sides for [math]\displaystyle{ M }[/math] and [math]\displaystyle{ M-1 }[/math] agree for every [math]\displaystyle{ M \gt 1 }[/math]. Subtracting, it suffices to show that

[math]\displaystyle{ 0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \frac{1}{M^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M - C_{M-1}} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. }[/math]

The contour [math]\displaystyle{ C_M - C_{M-1} }[/math] encloses the simple poles at [math]\displaystyle{ +2\pi i M }[/math] and [math]\displaystyle{ -2\pi i M }[/math], which have residues of [math]\displaystyle{ (2\pi i M)^{s-1} = - i (2\pi M)^{s-1} e^{\pi i s/2} }[/math] and [math]\displaystyle{ (-2\pi i M)^{s-1} = i (2\pi M)^{s-1} e^{3\pi i s/2} }[/math] respectively. So, on canceling the factor of [math]\displaystyle{ M^{s-1} }[/math] it suffices to show that

[math]\displaystyle{ 0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} + e^{-i\pi s} \Gamma(1-s) (2\pi)^{s-1} i (e^{3\pi i s/2} - e^{\pi i s/2}). }[/math]

But this follows from the duplication formula [math]\displaystyle{ \Gamma(1-s) = \frac{\Gamma(\frac{1-s}{2}) \Gamma(1-\frac{s}{2})}{\pi^{1/2} 2^s} }[/math] and the Euler reflection formula [math]\displaystyle{ \Gamma(\frac{s}{2}) \Gamma(1-\frac{s}{2}) = \frac{\pi}{\sin(\pi s/2)} }[/math].

Finally we verify the third claim. Since [math]\displaystyle{ \zeta(s) = \lim_{N \to \infty} \sum_{n=1}^N \frac{1}{n^s} }[/math], it suffices to show that

[math]\displaystyle{ \lim_{N \to \infty} \int_{C_0} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw = 0. }[/math]

We take [math]\displaystyle{ C_0 }[/math] to be a contour that traverses a [math]\displaystyle{ 1/N }[/math]-neighbourhood of the positive real axis. Writing [math]\displaystyle{ C_0 = \frac{1}{N} C'_0 }[/math], with [math]\displaystyle{ C'_0 }[/math] independent of [math]\displaystyle{ N }[/math], we can thus write the left-hand side as

[math]\displaystyle{ \lim_{N \to \infty} N^{-s} \int_{C'_0} \frac{w^{s-1} e^{-w}}{e^{w/N}-1}\ dw, }[/math]

and the claim follows from the dominated convergence theorem. [math]\displaystyle{ \Box }[/math]
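
The proposition can be checked numerically for small [math]\displaystyle{ N, M }[/math] by implementing the contour integral directly. In the sketch below, [math]\displaystyle{ C_M }[/math] is taken to be a rectangle of height [math]\displaystyle{ (2M+1)\pi }[/math] around the enclosed zeroes, traversed anticlockwise and truncated at a large positive real part where [math]\displaystyle{ e^{-Nw} }[/math] is negligible; the sample values of [math]\displaystyle{ s, N, M }[/math] are arbitrary choices.

<pre>
# Numerical check of Proposition 1 (Riemann-Siegel formula) for a sample s, N, M.
# The contour C_M is a rectangle with corners (X, h), (-1, h), (-1, -h), (X, -h),
# h = (2M+1) pi, traversed anticlockwise; it encloses the zeroes 2 pi i m, |m| <= M,
# and never crosses the positive real axis, where the branch cut of log lies.
from mpmath import mp

mp.dps = 30

def logw(w):
    # branch of log with imaginary part in [0, 2 pi), as in the proposition
    a = mp.arg(w)
    if a < 0:
        a += 2*mp.pi
    return mp.log(abs(w)) + mp.mpc(0, 1)*a

def seg(f, z0, z1):
    # integral of f along the straight segment from z0 to z1
    return (z1 - z0)*mp.quad(lambda r: f(z0 + r*(z1 - z0)), [0, 1], maxdegree=10)

s = mp.mpc(0.6, 10.0)
N, M = 3, 2
h = (2*M + 1)*mp.pi
X = mp.mpf(30)      # e^{-Nw} makes the omitted tails beyond Re(w) = X negligible

def f(w):
    return mp.exp((s - 1)*logw(w) - N*w)/(mp.exp(w) - 1)

corners = [mp.mpc(X, h), mp.mpc(-1, h), mp.mpc(-1, -h), mp.mpc(X, -h)]
contour_integral = sum(seg(f, corners[k], corners[k + 1]) for k in range(3))

rhs = (sum(mp.power(n, -s) for n in range(1, N + 1))
       + mp.pi**(s - mp.mpf(1)/2)*mp.gamma((1 - s)/2)/mp.gamma(s/2)
         * sum(mp.power(m, s - 1) for m in range(1, M + 1))
       + mp.exp(-mp.mpc(0, 1)*mp.pi*s)*mp.gamma(1 - s)/(2*mp.pi*mp.mpc(0, 1))*contour_integral)

print("zeta(s) =", mp.zeta(s))
print("rhs     =", rhs)
</pre>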

Applying the Riemann-Siegel formula to the Riemann xi function [math]\displaystyle{ \xi(s) = \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s) }[/math], we have

[math]\displaystyle{ \xi(s) = F_{0,N}(s) + \overline{F_{0,M}(\overline{1-s})} + R_{0,N,M}(s) }[/math]

where

[math]\displaystyle{ F_{0,N}(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s} }[/math]

and

[math]\displaystyle{ R_{0,N,M}(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. }[/math]

In preparation for applying the heat equation, we now write [math]\displaystyle{ F_{0,N}(s) }[/math] and [math]\displaystyle{ R_{0,N,M}(s) }[/math] as Fourier-Laplace transforms for [math]\displaystyle{ s }[/math] in the first quadrant. Firstly we have

[math]\displaystyle{ \frac{s(s-1)}{2} \Gamma(s/2) = 2 \Gamma(\frac{s+4}{2}) - 3 \Gamma(\frac{s+2}{2}) }[/math]

so that

[math]\displaystyle{ F_{0,N}(s) = \sum_{n=1}^N 2 \Gamma(\frac{s+4}{2}) (\pi n^2)^{-s/2} - 3 \Gamma(\frac{s+2}{2}) (\pi n^2)^{-s/2}. }[/math]

Since (by a change of variables [math]\displaystyle{ x=e^u }[/math]) we have

[math]\displaystyle{ \Gamma(s) = \int_0^\infty x^{s} e^{-x}\ \frac{dx}{x} = \int_{-\infty}^\infty \exp( su - e^u )\ du \quad (1) }[/math]

we thus have

[math]\displaystyle{ F_{0,N}(s) = \sum_{n=1}^N 2 \int_{-\infty}^\infty \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log \pi n^2 )\ du - 3 \int_{-\infty}^\infty \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log \pi n^2 )\ du. }[/math]

We can shift the [math]\displaystyle{ u }[/math] contour to a contour [math]\displaystyle{ \Gamma }[/math] that starts at [math]\displaystyle{ i\infty }[/math] and ends at [math]\displaystyle{ +\infty }[/math], staying within a bounded distance of the upper imaginary and right real axes. One should think of the first summand as the main term and the second one as a lower order term (about [math]\displaystyle{ O(1/|s|) }[/math] smaller in practice). In [math]\displaystyle{ s }[/math] coordinates, the backwards heat equation becomes

[math]\displaystyle{ \partial_t F_{0,N}(s) = \frac{1}{4} \partial_{ss} F_{0,N}(s) }[/math]

and so the evolution becomes

[math]\displaystyle{ F_{t,N}(s) = \sum_{n=1}^N 2 \int_{\Gamma} \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log \pi n^2 + \frac{t}{16} (u - \log \pi n^2)^2 )\ du - 3 \int_{\Gamma} \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log \pi n^2 + \frac{t}{16} (u - \log \pi n^2)^2 )\ du. }[/math]

Note that because we shifted contours to [math]\displaystyle{ \Gamma }[/math], the integrand here remains absolutely integrable. If we shift the first variable of integration by [math]\displaystyle{ \log \frac{s+4}{2} }[/math] and the second by [math]\displaystyle{ \log \frac{s+2}{2} }[/math], we obtain

[math]\displaystyle{ F_{t,N}(s) = 2 \sum_{n=1}^N \frac{\exp( \frac{s+4}{2} \log \frac{s+4}{2} - \frac{s+4}{2})}{(\pi n^2)^{s/2}} \int_\Gamma \exp( \frac{s+4}{2} (1 + u - e^u) + \frac{t}{16} (u + \log \frac{s+4}{2\pi n^2})^2 )\ du }[/math]
[math]\displaystyle{ - 3 \sum_{n=1}^N \frac{\exp( \frac{s+2}{2} \log \frac{s+2}{2} - \frac{s+2}{2})}{(\pi n^2)^{s/2}} \int_\Gamma \exp( \frac{s+2}{2} (1 + u - e^u) + \frac{t}{16} (u + \log \frac{s+2}{2\pi n^2})^2 )\ du. }[/math]

Now we manipulate [math]\displaystyle{ R_{0,N,M}(s) }[/math]. Firstly, from the duplication formula [math]\displaystyle{ \Gamma(1-s) = \frac{\Gamma(\frac{1-s}{2}) \Gamma(1-\frac{s}{2})}{\pi^{1/2} 2^s} }[/math] and the Euler reflection formula [math]\displaystyle{ \Gamma(\frac{s}{2}) \Gamma(1-\frac{s}{2}) = \frac{\pi}{\sin(\pi s/2)} }[/math] we have

[math]\displaystyle{ \Gamma(s/2) \Gamma(1-s) = \frac{\pi^{1/2}}{2^s \sin(\pi s/2)} \Gamma(\frac{1-s}{2}) }[/math]

and thus

[math]\displaystyle{ R_{0,N,M}(s) = \frac{s(s-1)}{2} \frac{\pi^{(1-s)/2}}{2^s \sin(\pi s/2)} \Gamma(\frac{1-s}{2}) \frac{e^{-i\pi s}}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. }[/math]

Next, we have

[math]\displaystyle{ \frac{s(s-1)}{2} \Gamma(\frac{1-s}{2}) = 2 \Gamma(\frac{5-s}{2}) - 3 \Gamma(\frac{3-s}{2}) }[/math]

and hence by (1)

[math]\displaystyle{ R_{0,N,M}(s) = 2 \frac{\pi^{(1-s)/2}}{2^s \sin(\pi s/2)} \frac{e^{-i\pi s}}{2\pi i} \int_{-\infty}^\infty \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u ) \ dw du }[/math]
[math]\displaystyle{ - 3 \frac{\pi^{(1-s)/2}}{2^s \sin(\pi s/2)} \frac{e^{-i\pi s}}{2\pi i} \int_{-\infty}^\infty \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u ) \ dw du. }[/math]

We can shift the [math]\displaystyle{ u }[/math] contour to a contour [math]\displaystyle{ \overline{\Gamma} }[/math] that starts at [math]\displaystyle{ -i\infty }[/math] and ends at [math]\displaystyle{ +\infty }[/math], staying within a bounded distance of the lower imaginary and right real axes. Next, from the geometric series formula one has

[math]\displaystyle{ \frac{1}{\sin(\pi s/2)} = -2i e^{i\pi s/2} \sum_{n=0}^\infty e^{\pi i n s} }[/math]

and hence

[math]\displaystyle{ R_{0,N,M}(s) = - 2 \frac{\pi^{(-s-1)/2}}{2^s} \sum_{n=0}^\infty e^{\pi i (n-\frac{1}{2})s} \int_{\overline{\Gamma}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u ) \ dw du }[/math]
[math]\displaystyle{ + 3 \frac{\pi^{(-s-1)/2}}{2^s} \sum_{n=0}^\infty e^{\pi i (n-\frac{1}{2})s} \int_{\overline{\Gamma}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u ) \ dw du. }[/math]
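
The expansion of [math]\displaystyle{ \frac{1}{\sin(\pi s/2)} }[/math] used above converges for [math]\displaystyle{ \mathrm{Im}(s) \gt 0 }[/math] (successive terms shrink by a factor [math]\displaystyle{ |e^{\pi i s}| = e^{-\pi \mathrm{Im}(s)} }[/math]) and is easy to sanity-check numerically; the sample point below is arbitrary.

<pre>
# Check of 1/sin(pi s/2) = -2i e^{i pi s/2} sum_{n>=0} e^{pi i n s}  (valid for Im(s) > 0).
from mpmath import mp

mp.dps = 30
s = mp.mpc(0.7, 3.0)
i = mp.mpc(0, 1)
series = -2*i*mp.exp(i*mp.pi*s/2)*sum(mp.exp(i*mp.pi*n*s) for n in range(40))
print("1/sin(pi s/2) =", 1/mp.sin(mp.pi*s/2))
print("series        =", series)
</pre>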

We can then evolve by the heat flow to obtain

[math]\displaystyle{ R_{t,N,M}(s) = - 2 \frac{\pi^{(-s-1)/2}}{2^s} \sum_{n=0}^\infty e^{\pi i (n-\frac{1}{2})s} \int_{\overline{\Gamma}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u + \frac{t}{16} (-u + \pi i (2n-1) + \log \frac{w^2}{4\pi})^2 ) \ dw du }[/math]
[math]\displaystyle{ + 3 \frac{\pi^{(-s-1)/2}}{2^s} \sum_{n=0}^\infty e^{\pi i (n-\frac{1}{2})s} \int_{\overline{\Gamma}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u + \frac{t}{16} (-u + \pi i (2n-1) + \log \frac{w^2}{4\pi})^2 ) \ dw du, }[/math]

noting that the integrals here are absolutely convergent for [math]\displaystyle{ t \gt 0 }[/math]. We now have the exact formula

[math]\displaystyle{ H_t(x+iy) = \frac{1}{8} ( F_{t,N}(\frac{1+ix-y}{2}) + \overline{F_{t,M}(\frac{1+ix+y}{2})} + R_{t,N,M}( \frac{1+ix-y}{2} ) ). }[/math]

A good choice for [math]\displaystyle{ N,M }[/math] appears to be [math]\displaystyle{ N=M=\lfloor \sqrt{x/4\pi}\rfloor }[/math].

Now for asymptotics. For [math]\displaystyle{ \lambda }[/math] in the first quadrant, one has

[math]\displaystyle{ \int_\Gamma \exp( \lambda( 1+u-e^u) )\ du = \exp( \lambda - \lambda \log \lambda ) \Gamma(\lambda) }[/math]

and then on integrating by parts

[math]\displaystyle{ \int_\Gamma \exp( \lambda( 1+u-e^u) ) (1 - e^u)\ du = 0 }[/math]

and

[math]\displaystyle{ \int_\Gamma \exp( \lambda( 1+u-e^u) ) u (1 - e^u)\ du = -\frac{1}{\lambda} \exp( \lambda - \lambda \log \lambda ) \Gamma(\lambda). }[/math]

This suggests the stationary phase asymptotic

[math]\displaystyle{ \int_\Gamma \exp( \lambda( 1+u-e^u) ) f(u)\ du = \exp( \lambda - \lambda \log \lambda ) \Gamma(\lambda) (f(0) + \frac{f''(0) - f'(0)}{2 \lambda} + O( \|f\|/\lambda^2) ) }[/math]

for reasonable functions [math]\displaystyle{ f }[/math], with a reasonable norm [math]\displaystyle{ \|f\| }[/math]. This suggests

[math]\displaystyle{ F_{t,N}(s) \approx 2 \sum_{n=1}^N \frac{\Gamma( \frac{s+4}{2} )}{(\pi n^2)^{s/2}} \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} ) (1 + (\frac{t}{16} + \frac{t^2}{128} \log^2 \frac{s+4}{2\pi n^2} - \frac{t}{16} \log \frac{s+4}{2\pi n^2} ) \frac{2}{s+4} ) }[/math]
[math]\displaystyle{ - 3 \sum_{n=1}^N \frac{\Gamma( \frac{s+2}{2} )}{(\pi n^2)^{s/2}} \exp( \frac{t}{16} \log^2 \frac{s+2}{2\pi n^2} ). }[/math]
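
As a quick check of the stationary phase expansion above, one can take the test function [math]\displaystyle{ f(u) = e^{\beta u} }[/math], for which the left-hand side has the closed form [math]\displaystyle{ e^{\lambda} \lambda^{-(\lambda+\beta)} \Gamma(\lambda+\beta) }[/math] by the same substitution as in (1); the expansion then predicts this value to within [math]\displaystyle{ O(1/\lambda^2) }[/math]. The sample [math]\displaystyle{ \lambda, \beta }[/math] below are arbitrary choices.

<pre>
# Check the stationary phase expansion with f(u) = e^{beta u}:
#   f(0) = 1, f'(0) = beta, f''(0) = beta^2, and the exact value of the integral is
#   exp(lambda) * lambda^{-(lambda+beta)} * Gamma(lambda+beta).
from mpmath import mp

mp.dps = 30
lam, beta = mp.mpc(60, 40), mp.mpf(0.7)

exact = mp.exp(lam)*mp.exp(-(lam + beta)*mp.log(lam))*mp.gamma(lam + beta)
approx = mp.exp(lam - lam*mp.log(lam))*mp.gamma(lam)*(1 + (beta**2 - beta)/(2*lam))

print("exact  :", exact)
print("approx :", approx)
print("relative error =", abs(exact - approx)/abs(exact), "(vs 1/|lambda|^2 =", 1/abs(lam)**2, ")")
</pre>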

Now we heuristically estimate the [math]\displaystyle{ R_{t,N,M}(s) }[/math] term to top order. We discard the second term as being lower order. The critical point for [math]\displaystyle{ u }[/math] is [math]\displaystyle{ u = \log \frac{5-s}{2} }[/math], so we approximate [math]\displaystyle{ \frac{t}{16} (-u + \pi i (2n-1) + \log \frac{w^2}{4\pi})^2 }[/math] by [math]\displaystyle{ \frac{t}{16} (\pi i (2n-1) + \log \frac{w^2}{2\pi(5-s)})^2 }[/math] and arrive at

[math]\displaystyle{ R_{t,N,M}(s) \approx - 2 \Gamma(\frac{5-s}{2}) \frac{\pi^{(-s-1)/2}}{2^s} \sum_{n=0}^\infty e^{\pi i (n-\frac{1}{2})s} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{t}{16} (\pi i (2n-1) + \log \frac{w^2}{2\pi(5-s)})^2 ) \ dw. }[/math]

The critical point of [math]\displaystyle{ w^s e^{-Nw} }[/math] occurs at [math]\displaystyle{ w = \frac{s}{N} }[/math]. Approximating [math]\displaystyle{ \log \frac{w^2}{2\pi(5-s)} }[/math] by [math]\displaystyle{ \log \frac{s^2}{2\pi N^2 (5-s)} }[/math], we thus have

[math]\displaystyle{ R_{t,N,M}(s) \approx - 2 \Gamma(\frac{5-s}{2}) \frac{\pi^{(-s-1)/2}}{2^s} \sum_{n=0}^\infty e^{\pi i (n-\frac{1}{2})s} \exp( \frac{t}{16} (\pi i (2n-1) + \log \frac{s^2}{2\pi N^2(5-s)})^2 ) \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \ dw. }[/math]

The [math]\displaystyle{ n=0 }[/math] term should dominate quite strongly, thus

[math]\displaystyle{ R_{t,N,M}(s) \approx - 2 \Gamma(\frac{5-s}{2}) \frac{\pi^{(-s-1)/2}}{2^s} e^{-\pi i s/2} \exp( \frac{t}{16} (-\pi i + \log \frac{s^2}{2\pi N^2(5-s)})^2 ) \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \ dw. }[/math]

To compute the integral, we make the change of variables [math]\displaystyle{ w = 2\pi i M + z }[/math] and note that

[math]\displaystyle{ w^{s-1} e^{-Nw} = (2\pi i M)^{s-1} \exp( - 2\pi i MN ) \exp( (s-1) \log(1 + \frac{z}{2\pi i M}) - N z) }[/math]
[math]\displaystyle{ \approx (2\pi i M)^{s-1} \exp( - 2\pi i MN ) \exp( \frac{i}{4\pi} z^2 + \frac{s-2\pi i MN}{2\pi i M} z) }[/math]

so we expect

[math]\displaystyle{ \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \ dw \approx (2\pi i M)^{s-1} \exp( - 2\pi i MN )\Phi(\frac{s-2\pi i MN}{2\pi i M}) }[/math]

where

[math]\displaystyle{ \Phi(\alpha) := \int_C \frac{\exp( \frac{i}{4\pi} z^2 + \alpha z)}{e^z - 1}\ dz }[/math]

where [math]\displaystyle{ C }[/math] is a contour that goes counter-clockwise around the origin (as well as all the zeroes of [math]\displaystyle{ e^{z}-1 }[/math] on the negative imaginary axis).

We can calculate the integral explicitly:

Lemma 2 For any complex [math]\displaystyle{ \alpha }[/math], we have [math]\displaystyle{ \Phi(\alpha) = 2\pi \frac{\cos \pi(\frac{1}{2} \alpha^2 - \alpha + \frac{1}{8})}{\cos(\pi \alpha)} \exp( \frac{i \pi}{2} \alpha^2 + \frac{5 i \pi}{8} ) }[/math].

Proof The integrand has a residue of [math]\displaystyle{ 1 }[/math] at [math]\displaystyle{ 0 }[/math], hence on shifting the contour downward by [math]\displaystyle{ 2\pi i }[/math] we have

[math]\displaystyle{ \Phi(\alpha) = 2\pi i + \int_C \frac{\exp( \frac{i}{4\pi} (z-2\pi i)^2 + \alpha (z-2\pi i) )}{e^z-1}\ dz. }[/math]

The right-hand side expands as

[math]\displaystyle{ 2\pi i - e^{-2\pi i \alpha} \int_C \frac{\exp( \frac{i}{4\pi} z^2 + (\alpha+1) z)}{e^z-1}\ dz }[/math]

which we can write as

[math]\displaystyle{ 2\pi i - e^{-2\pi i \alpha} (\Phi(\alpha) + \int_C \exp( \frac{i}{4\pi} z^2 + \alpha z)\ dz). }[/math]

The last integral is a standard gaussian integral, which can be evaluated as [math]\displaystyle{ \sqrt{\frac{\pi}{i/4\pi}} \exp( \pi i \alpha^2) }[/math]. Hence

[math]\displaystyle{ \Phi(\alpha) = 2\pi i - e^{-2\pi i \alpha} (\Phi(\alpha) + \sqrt{\frac{\pi}{i/4\pi}} \exp( \pi i \alpha^2)), }[/math]

and the claim then follows after some algebra. [math]\displaystyle{ \Box }[/math]
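
As a sanity check, the closed form in Lemma 2 can be verified numerically against the recursion [math]\displaystyle{ \Phi(\alpha) = 2\pi i - e^{-2\pi i \alpha} (\Phi(\alpha) + \sqrt{\frac{\pi}{i/4\pi}} \exp( \pi i \alpha^2)) }[/math] derived in the proof. Note that this only checks algebraic consistency with the proof (including the stated value of the gaussian integral), not the contour integral itself.

<pre>
# Consistency check of the closed form of Lemma 2 against the recursion from its proof.
from mpmath import mp

mp.dps = 30
i = mp.mpc(0, 1)

def Phi_closed(a):
    return (2*mp.pi*mp.cos(mp.pi*(a**2/2 - a + mp.mpf(1)/8))/mp.cos(mp.pi*a)
            * mp.exp(i*mp.pi*a**2/2 + 5*i*mp.pi/8))

def gauss(a):
    return mp.sqrt(mp.pi/(i/(4*mp.pi)))*mp.exp(mp.pi*i*a**2)

for a in [mp.mpc(0.3, 0.2), mp.mpc(-1.1, 0.7), mp.mpc(2.4, -0.5)]:
    lhs = Phi_closed(a)
    rhs = 2*mp.pi*i - mp.exp(-2*mp.pi*i*a)*(Phi_closed(a) + gauss(a))
    print(a, " residual:", abs(lhs - rhs))
</pre>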