Asymptotics of H_t
Asymptotics for [math]\displaystyle{ t=0 }[/math]
The approximate functional equation (see e.g. [T1986, (4.12.4)]) asserts that
- [math]\displaystyle{ \displaystyle \zeta(s) = \sum_{n \leq N} \frac{1}{n^s} + \pi^{s-1/2} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{n \leq N} \frac{1}{n^{1-s}} + O( t^{-\sigma/2} ) }[/math]
for [math]\displaystyle{ s = \sigma +it }[/math] with [math]\displaystyle{ t }[/math] large, [math]\displaystyle{ 0 \lt \sigma \lt 1 }[/math], and [math]\displaystyle{ N := \sqrt{t/2\pi} }[/math]. This implies that
- [math]\displaystyle{ \displaystyle \xi(s) = F(s) + F(1-s) + O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} ) }[/math]
where
- [math]\displaystyle{ \displaystyle F(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}. }[/math]
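As a quick numerical sanity check of the approximate functional equation above, one can compare its right-hand side with [math]\displaystyle{ \zeta(s) }[/math] at a sample point. This is only an illustrative sketch: it assumes the mpmath library (any arbitrary-precision package would do), and the sample value of [math]\displaystyle{ s }[/math] is arbitrary.

```python
# Compare zeta(s) with the approximate functional equation at s = 1/2 + 1000i.
from mpmath import mp, mpc, zeta, gamma, pi, sqrt, floor, power

mp.dps = 30
s = mpc(0.5, 1000)
t, sigma = s.imag, s.real
N = int(floor(sqrt(t / (2 * pi))))          # N = sqrt(t / (2 pi))

chi = power(pi, s - 0.5) * gamma((1 - s) / 2) / gamma(s / 2)
rhs = sum(power(n, -s) for n in range(1, N + 1)) \
    + chi * sum(power(n, s - 1) for n in range(1, N + 1))

print(abs(zeta(s) - rhs))    # difference, should be of size O(t^(-sigma/2))
print(power(t, -sigma / 2))  # the heuristic error scale, about 0.18 here
```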
Writing
- [math]\displaystyle{ \displaystyle \frac{s(s-1)}{2} \Gamma(s/2) = 2 \Gamma(\frac{s+4}{2}) - 3 \Gamma(\frac{s+2}{2}) }[/math]
we have [math]\displaystyle{ F(s) = 2 F_0(s) - 3 F_{-1}(s) }[/math], where
- [math]\displaystyle{ \displaystyle F_j(s) := \pi^{-s/2} \Gamma(\frac{s+4}{2} + j) \sum_{n=1}^N \frac{1}{n^s}. }[/math]
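The Gamma-function identity used in this splitting is elementary, but here is a short numerical confirmation (mpmath assumed, sample point arbitrary):

```python
# Check s(s-1)/2 * Gamma(s/2) = 2*Gamma((s+4)/2) - 3*Gamma((s+2)/2).
from mpmath import mp, mpc, gamma

mp.dps = 30
s = mpc(0.3, 7.1)
print(abs(s * (s - 1) / 2 * gamma(s / 2)
          - (2 * gamma((s + 4) / 2) - 3 * gamma((s + 2) / 2))))  # ~ 1e-29
```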
The [math]\displaystyle{ F_{-1} }[/math] terms are of size [math]\displaystyle{ O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} ) }[/math] and can be absorbed into the error term, hence
- [math]\displaystyle{ \displaystyle \xi(s) = 2F_0(s) + 2F_0(1-s) + O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} ) }[/math]
and thus
- [math]\displaystyle{ \displaystyle H(x+iy) = \frac{1}{4} F_0( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_0( \frac{1+ix+y}{2} )} + O( \Gamma(\frac{9+ix+y}{4}) x^{-(1+y)/4} ). }[/math]
One would expect the [math]\displaystyle{ \sum_{n=1}^N \frac{1}{n^s} }[/math] term to remain more or less bounded (this is basically the Lindelöf hypothesis), leading to the heuristic
- [math]\displaystyle{ \displaystyle |F_0(\frac{1+ix \pm y}{2})| \asymp |\Gamma(\frac{9+ix \pm y}{4})|. }[/math]
Since [math]\displaystyle{ \Gamma(\frac{9+ix - y}{4}) \approx \Gamma(\frac{9+ix+y}{4}) (ix/4)^{-y/2} }[/math], we expect the [math]\displaystyle{ F_0( \frac{1+ix+y}{2} ) }[/math] term to dominate once [math]\displaystyle{ y \gg \frac{1}{\log x} }[/math].
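The Gamma-ratio approximation behind this domination claim can be illustrated numerically; the relative error should be roughly of size [math]\displaystyle{ O(1/x) }[/math] (mpmath assumed, parameters arbitrary):

```python
# Gamma((9+ix-y)/4) versus Gamma((9+ix+y)/4) * (ix/4)^(-y/2) for large x.
from mpmath import mp, mpc, gamma, power

mp.dps = 30
x, y = 1000.0, 0.4
lhs = gamma(mpc(9 - y, x) / 4)
rhs = gamma(mpc(9 + y, x) / 4) * power(mpc(0, x) / 4, -y / 2)
print(abs(lhs / rhs - 1))    # roughly of size 1/x
```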
Asymptotics for [math]\displaystyle{ t \gt 0 }[/math]
Let [math]\displaystyle{ z=x+iy }[/math] for large [math]\displaystyle{ x }[/math] and positive bounded [math]\displaystyle{ y }[/math]. We have
- [math]\displaystyle{ \displaystyle H_t(z) = \frac{1}{2} \int_{-\infty}^\infty e^{tu^2} \Phi(u) \exp(izu)\ du }[/math]
where
- [math]\displaystyle{ \displaystyle \Phi(u) = \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}). }[/math]
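At [math]\displaystyle{ t=0 }[/math] this integral representation can be checked directly against [math]\displaystyle{ \xi((1+iz)/2)/8 }[/math], the normalization implicit in the [math]\displaystyle{ t=0 }[/math] section above. The sketch below assumes mpmath, folds the integral onto [math]\displaystyle{ [0,\infty) }[/math] using the standard fact that [math]\displaystyle{ \Phi }[/math] is even, and truncates the sum at [math]\displaystyle{ n \leq 20 }[/math], which is ample for [math]\displaystyle{ u \geq 0 }[/math]; the test point is arbitrary.

```python
# Check H_0(z) = xi((1+iz)/2)/8 numerically from the integral representation.
from mpmath import mp, mpc, exp, cos, pi, gamma, zeta, quad, power

mp.dps = 30

def Phi(u):
    return sum((2 * pi**2 * n**4 * exp(9 * u) - 3 * pi * n**2 * exp(5 * u))
               * exp(-pi * n**2 * exp(4 * u)) for n in range(1, 21))

def H(t, z):   # Phi is even, so the two-sided integral folds into a cosine integral
    return quad(lambda u: exp(t * u**2) * Phi(u) * cos(z * u), [0, 2])

def xi(s):
    return s * (s - 1) / 2 * power(pi, -s / 2) * gamma(s / 2) * zeta(s)

z = mpc(10, 0.2)
print(abs(H(0, z) - xi((1 + 1j * z) / 2) / 8))   # limited only by quadrature error
```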
We can shift contours to
- [math]\displaystyle{ \displaystyle H_t(z) = \frac{1}{2} \int_{i\theta-\infty}^{i\theta+\infty} e^{tu^2} \Phi(u) \exp(izu)\ du }[/math]
for any [math]\displaystyle{ -\pi/8 \lt \theta \lt \pi/8 }[/math] that we please; it seems that a good choice will be [math]\displaystyle{ \theta = \frac{1}{4} \mathrm{arg} (ix+y+9) \approx \frac{\pi}{8} - \frac{y+9}{4x} }[/math]. By symmetry, we thus have
- [math]\displaystyle{ \displaystyle H_t(z) = G_t(x+iy) + \overline{G_t(x-iy)} }[/math]
where
- [math]\displaystyle{ \displaystyle G_t(z) := \frac{1}{2} \int_{i\theta}^{i\theta+\infty} e^{tu^2} \Phi(u) \exp(izu)\ du. }[/math]
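The decomposition [math]\displaystyle{ H_t(z) = G_t(x+iy) + \overline{G_t(x-iy)} }[/math], with the factor [math]\displaystyle{ \frac{1}{2} }[/math] in the definition of [math]\displaystyle{ G_t }[/math], can be tested numerically against the original real-axis integral. This is a sketch assuming mpmath; the parameters [math]\displaystyle{ t, x, y, \theta }[/math] are arbitrary subject to [math]\displaystyle{ |\theta| \lt \pi/8 }[/math], and [math]\displaystyle{ \Phi }[/math] is truncated at [math]\displaystyle{ n \leq 20 }[/math].

```python
# Check H_t(z) = G_t(x+iy) + conj(G_t(x-iy)) for the contour-shifted G_t.
from mpmath import mp, mpc, exp, cos, pi, quad, conj

mp.dps = 30
t, x, y, theta = 0.2, 10.0, 0.2, 0.3     # need |theta| < pi/8 = 0.3926...

def Phi(u):
    return sum((2 * pi**2 * n**4 * exp(9 * u) - 3 * pi * n**2 * exp(5 * u))
               * exp(-pi * n**2 * exp(4 * u)) for n in range(1, 21))

def H(z):      # H_t(z) from the real-axis integral (Phi is even)
    return quad(lambda u: exp(t * u**2) * Phi(u) * cos(z * u), [0, 2])

def G(z):      # (1/2) * integral along the ray from i*theta to i*theta + infinity
    return quad(lambda r: exp(t * (1j * theta + r)**2) * Phi(1j * theta + r)
                * exp(1j * z * (1j * theta + r)), [0, 2]) / 2

print(abs(H(mpc(x, y)) - (G(mpc(x, y)) + conj(G(mpc(x, -y))))))   # tiny
```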
By Fubini's theorem we have
- [math]\displaystyle{ \displaystyle G_{t}(x \pm i y) = \sum_{n=1}^\infty \pi^2 n^4 \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \pi n^2 e^{4u} + (ix \mp y + 9) u)\ du }[/math]
- [math]\displaystyle{ \displaystyle - \sum_{n=1}^\infty \frac{3}{2} \pi n^2 \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \pi n^2 e^{4u} + (ix \mp y + 5) u)\ du. }[/math]
The second terms end up being smaller than the first terms by a factor of about [math]\displaystyle{ O(1/x) }[/math], and we will ignore them for now. Making the change of variables [math]\displaystyle{ u = \frac{1}{4} \log \frac{ix \pm y + 9}{4\pi n^2} + v }[/math], we basically have
- [math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \sum_{n=1}^\infty \pi^2 n^4 (\frac{ix \pm y+9}{4\pi n^2})^{\frac{ix \mp y+9}{4}} \int_{-\frac{1}{4} \log \frac{|ix\pm y+9|}{4\pi n^2}}^\infty \exp( \frac{t}{16} (\log \frac{ix \pm y+9}{4\pi n^2} + v)^2 + (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv. }[/math]
The function [math]\displaystyle{ \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) ) }[/math] decays rapidly away from [math]\displaystyle{ v=0 }[/math]. This suggests firstly that this integral is going to be very small when [math]\displaystyle{ n \gg N := \sqrt{x/4\pi} }[/math] (since the left limit of integration will then be to the right of the origin), so we will assume heuristically that [math]\displaystyle{ n }[/math] is now restricted to the range [math]\displaystyle{ n \leq N }[/math]. Next, we approximate [math]\displaystyle{ \exp( \frac{t}{16} (\log \frac{ix \pm y+9}{4\pi n^2} + v)^2) }[/math] by [math]\displaystyle{ \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) }[/math], and then send the left limit off to infinity to obtain (heuristically)
- [math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \sum_{n \leq N} \pi^2 n^4 (\frac{ix \pm y+9}{4\pi n^2})^{\frac{ix \mp y+9}{4}} \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) \int_{-\infty}^\infty \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv. }[/math]
Making the change of variables [math]\displaystyle{ w := \frac{ix \mp y + 9}{4} e^{4v} }[/math] we see that
- [math]\displaystyle{ \int_{-\infty}^\infty \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv = \frac{1}{4} \Gamma(\frac{ix \mp y + 9}{4}) (\frac{4}{ix \mp y + 9})^{\frac{ix \mp y+9}{4}} }[/math]
and thus
- [math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \Gamma(\frac{ix \mp y + 9}{4}) \sum_{n \leq N} \frac{\pi^2}{4} n^4 (\frac{1}{\pi n^2})^{\frac{ix \mp y+9}{4}} \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) }[/math]
which simplifies a bit to
- [math]\displaystyle{ \displaystyle G_t(x \pm iy) \approx \frac{1}{4} \pi^{-\frac{ix \mp y + 1}{4}} \Gamma(\frac{ix \mp y + 9}{4}) \sum_{n \leq N} \frac{\exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} )}{n^{\frac{1 \mp y + ix}{2}}} }[/math]
and thus we heuristically have
- [math]\displaystyle{ H_t(x+iy) \approx \frac{1}{4} F_t( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_t( \frac{1+ix+y}{2} )} }[/math]
where
- [math]\displaystyle{ F_t( s ) := \pi^{-s/2} \Gamma(\frac{s+4}{2}) \sum_{n \leq N} \frac{\exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} )}{n^{s}}. }[/math]
Here we can view [math]\displaystyle{ N }[/math] as a function of [math]\displaystyle{ s }[/math] by the formula [math]\displaystyle{ N = \sqrt{\mathrm{Im}(s)/2\pi} }[/math].
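Here is a rough numerical comparison of this heuristic with the defining integral at a modest height (mpmath assumed; the values of [math]\displaystyle{ t, x, y }[/math] are arbitrary). At such heights the heuristic drops terms of relative size [math]\displaystyle{ O(1/x) }[/math] and makes saddle-point and endpoint approximations, so only agreement to within tens of percent should be expected; the agreement improves as [math]\displaystyle{ x }[/math] grows.

```python
# Compare H_t(x+iy) with (1/4) F_t((1+ix-y)/2) + (1/4) conj(F_t((1+ix+y)/2)).
from mpmath import mp, mpc, exp, log, cos, pi, sqrt, gamma, conj, quad, floor, power

mp.dps = 40
t, x, y = 0.2, 60.0, 0.4

def Phi(u):
    return sum((2 * pi**2 * n**4 * exp(9 * u) - 3 * pi * n**2 * exp(5 * u))
               * exp(-pi * n**2 * exp(4 * u)) for n in range(1, 21))

def H(z):      # H_t(z) from the integral representation (Phi is even)
    return quad(lambda u: exp(t * u**2) * Phi(u) * cos(z * u), [0, 2])

def F(s):      # F_t(s) as defined above, with N = sqrt(Im(s)/(2 pi))
    N = int(floor(sqrt(s.imag / (2 * pi))))
    return power(pi, -s / 2) * gamma((s + 4) / 2) * sum(
        exp(t / 16 * log((s + 4) / (2 * pi * n**2))**2) / power(n, s)
        for n in range(1, N + 1))

approx = F(mpc(1 - y, x) / 2) / 4 + conj(F(mpc(1 + y, x) / 2)) / 4
exact = H(mpc(x, y))
print(exact, approx, abs(approx / exact - 1))   # rough agreement only
```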
To understand these asymptotics better, let us inspect [math]\displaystyle{ H_t(x+iy) }[/math] for [math]\displaystyle{ t\gt 0 }[/math] in the region
- [math]\displaystyle{ x+iy = T + \frac{a+ib}{\log T}; \quad t = \frac{\tau}{\log T} }[/math]
with [math]\displaystyle{ T }[/math] large, [math]\displaystyle{ a,b = O(1) }[/math], and [math]\displaystyle{ \tau \gt \frac{1}{2} }[/math]. If [math]\displaystyle{ s = \frac{1+ix-y}{2} }[/math], then we can approximate
- [math]\displaystyle{ \pi^{-s/2} \approx \pi^{-\frac{1+iT}{4}} }[/math]
- [math]\displaystyle{ \Gamma(\frac{s+4}{2}) \approx \Gamma(\frac{9+iT}{4}) T^{\frac{ia-b}{4 \log T}} = \exp( \frac{ia-b}{4} ) \Gamma(\frac{9+iT}{4}) }[/math]
- [math]\displaystyle{ \frac{1}{n^s} \approx \frac{1}{n^{\frac{1+iT}{2}}} }[/math]
- [math]\displaystyle{ \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} ) \approx \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi} - \frac{t}{4} \log T \log n ) }[/math]
- [math]\displaystyle{ \approx \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \frac{1}{n^{\frac{\tau}{4}}} }[/math]
leading to
- [math]\displaystyle{ F_t(\frac{1+ix-y}{2}) \approx \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \exp( \frac{ia-b}{4} ) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \sum_n \frac{1}{n^{\frac{1+iT}{2} + \frac{\tau}{4}}} }[/math]
- [math]\displaystyle{ \approx \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{ia-b}{4} + \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ). }[/math]
Similarly for [math]\displaystyle{ F_t(\frac{1+ix+y}{2}) }[/math] (replacing [math]\displaystyle{ b }[/math] by [math]\displaystyle{ -b }[/math]). If we make a polar coordinate representation
- [math]\displaystyle{ \frac{1}{2} \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) = r_{T,\tau} e^{i \theta_{T,\tau}} }[/math]
one thus has
- [math]\displaystyle{ H_t(x+iy) \approx \frac{1}{2} ( r_{T,\tau} e^{i \theta_{T,\tau}} \exp( \frac{ia-b}{4} ) + r_{T,\tau} e^{-i \theta_{T,\tau}} \exp(\frac{-ia+b}{4}) ) }[/math]
- [math]\displaystyle{ = r_{T,\tau} \cos( \frac{a+ib}{4} + \theta_{T,\tau} ). }[/math]
Thus locally [math]\displaystyle{ H_t(x+iy) }[/math] behaves like a trigonometric function, with zeroes real and equally spaced with spacing [math]\displaystyle{ 4\pi }[/math] (in [math]\displaystyle{ a }[/math]-coordinates) or [math]\displaystyle{ \frac{4\pi}{\log T} }[/math] (in [math]\displaystyle{ x }[/math] coordinates). Once [math]\displaystyle{ \tau }[/math] becomes large, further increase of [math]\displaystyle{ \tau }[/math] basically only increases [math]\displaystyle{ r_{T,\tau} }[/math] and also shifts [math]\displaystyle{ \theta_{T,\tau} }[/math] at rate [math]\displaystyle{ \pi/16 }[/math], causing the number of zeroes to the left of [math]\displaystyle{ T }[/math] to increase at rate [math]\displaystyle{ 1/4 }[/math] as claimed in [KKL2009].
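The claim that [math]\displaystyle{ \theta_{T,\tau} }[/math] shifts at a rate close to [math]\displaystyle{ \pi/16 }[/math] can be illustrated numerically from the polar representation above; the zeta factor contributes only a slowly varying correction once [math]\displaystyle{ \tau }[/math] is moderately large (mpmath assumed, and the values of [math]\displaystyle{ T, \tau }[/math] are arbitrary).

```python
# Estimate d(theta_{T,tau})/d(tau) and compare with pi/16.
from mpmath import mp, mpc, arg, gamma, zeta, exp, power, pi, log

mp.dps = 30
T = 10000.0

def c(tau):    # the quantity written above as r_{T,tau} * exp(i*theta_{T,tau})
    return (power(pi, -mpc(1, T) / 4) * gamma(mpc(9, T) / 4)
            * zeta(mpc(1, T) / 2 + tau / 4)
            * exp(tau / 16 * log(T) + 1j * pi * tau / 16)) / 2

tau1, tau2 = 8.0, 9.0
print(arg(c(tau2) / c(tau1)) / (tau2 - tau1), pi / 16)   # close, not exact
```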
Riemann-Siegel formula
Proposition 1 (Riemann-Siegel formula) For any natural numbers [math]\displaystyle{ N,M }[/math] and complex number [math]\displaystyle{ s }[/math] that is not an integer, we have
- [math]\displaystyle{ \zeta(s) = \sum_{n=1}^N \frac{1}{n^s} + \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{m=1}^M \frac{1}{m^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw }[/math]
where [math]\displaystyle{ w^{s-1} := \exp((s-1) \log w) }[/math] and we use the branch of the logarithm with imaginary part in [math]\displaystyle{ [0,2\pi) }[/math], and [math]\displaystyle{ C_M }[/math] is any contour from [math]\displaystyle{ +\infty }[/math] to [math]\displaystyle{ +\infty }[/math] going once anticlockwise around the zeroes [math]\displaystyle{ 2\pi i m }[/math] of [math]\displaystyle{ e^w-1 }[/math] with [math]\displaystyle{ |m| \leq M }[/math], but not around any other zeroes.
Proof This equation is in [T1986, p. 82], but we give a proof here. The right-hand side is meromorphic in [math]\displaystyle{ s }[/math], so it will suffice to establish that
1. The right-hand side is independent of [math]\displaystyle{ N }[/math];
2. The right-hand side is independent of [math]\displaystyle{ M }[/math];
3. Whenever [math]\displaystyle{ \mathrm{Re}(s)\gt 1 }[/math] and [math]\displaystyle{ s }[/math] is not an integer, the right-hand side converges to [math]\displaystyle{ \zeta(s) }[/math] if [math]\displaystyle{ M=0 }[/math] and [math]\displaystyle{ N \to \infty }[/math].
We begin with the first claim. It suffices to show that the right-hand sides for [math]\displaystyle{ N }[/math] and [math]\displaystyle{ N-1 }[/math] agree for every [math]\displaystyle{ N \gt 1 }[/math]. Subtracting, it suffices to show that
- [math]\displaystyle{ 0 = \frac{1}{N^s} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} (e^{-Nw} - e^{-(N-1)w})}{e^w-1}\ dw. }[/math]
The integrand here simplifies to [math]\displaystyle{ - w^{s-1} e^{-Nw} }[/math]; shrinking [math]\displaystyle{ C_M }[/math] to wrap around the positive real axis, the integral becomes [math]\displaystyle{ N^{-s} \Gamma(s) (1 - e^{2\pi i(s-1)}) }[/math]. The claim then follows from the Euler reflection formula [math]\displaystyle{ \Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)} }[/math].
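This reflection-formula step can be confirmed numerically: the quoted value of the contour integral, multiplied by the prefactor, exactly cancels the [math]\displaystyle{ 1/N^s }[/math] term (mpmath assumed; the sample [math]\displaystyle{ s, N }[/math] are arbitrary).

```python
# Check 1/N^s + e^{-i pi s} Gamma(1-s)/(2 pi i) * N^{-s} Gamma(s) (1 - e^{2 pi i (s-1)}) = 0.
from mpmath import mp, mpc, exp, gamma, pi, power

mp.dps = 30
s, N = mpc(0.7, 2.3), 5
val = power(N, -s) + exp(-1j * pi * s) * gamma(1 - s) / (2j * pi) \
    * power(N, -s) * gamma(s) * (1 - exp(2j * pi * (s - 1)))
print(abs(val))   # ~ 1e-29
```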
Now we verify the second claim. It suffices to show that the right-hand sides for [math]\displaystyle{ M }[/math] and [math]\displaystyle{ M-1 }[/math] agree for every [math]\displaystyle{ M \geq 1 }[/math]. Subtracting, it suffices to show that
- [math]\displaystyle{ 0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \frac{1}{M^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M - C_{M-1}} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw. }[/math]
The contour [math]\displaystyle{ C_M - C_{M-1} }[/math] encloses the simple poles at [math]\displaystyle{ +2\pi i M }[/math] and [math]\displaystyle{ -2\pi i M }[/math], which have residues of [math]\displaystyle{ (2\pi i M)^{s-1} = - i (2\pi M)^{s-1} e^{\pi i s/2} }[/math] and [math]\displaystyle{ (-2\pi i M)^{s-1} = i (2\pi M)^{s-1} e^{3\pi i s/2} }[/math] respectively. So, on canceling the factor of [math]\displaystyle{ M^{s-1} }[/math] it suffices to show that
- [math]\displaystyle{ 0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} + e^{-i\pi s} \Gamma(1-s) (2\pi)^{s-1} i (e^{3\pi i s/2} - e^{\pi i s/2}). }[/math]
But this follows from the duplication formula [math]\displaystyle{ \Gamma(1-s) = \frac{\Gamma(\frac{1-s}{2}) \Gamma(1-\frac{s}{2})}{\pi^{1/2} 2^s} }[/math] and the Euler reflection formula [math]\displaystyle{ \Gamma(\frac{s}{2}) \Gamma(1-\frac{s}{2}) = \frac{\pi}{\sin(\pi s/2)} }[/math].
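Again, this identity is easy to confirm numerically at a random non-integer [math]\displaystyle{ s }[/math] (mpmath assumed):

```python
# Check the displayed identity combining the duplication and reflection formulas.
from mpmath import mp, mpc, exp, gamma, pi, power

mp.dps = 30
s = mpc(0.7, 2.3)
val = power(pi, s - 0.5) * gamma((1 - s) / 2) / gamma(s / 2) \
    + exp(-1j * pi * s) * gamma(1 - s) * power(2 * pi, s - 1) * 1j \
    * (exp(3j * pi * s / 2) - exp(1j * pi * s / 2))
print(abs(val))   # ~ 1e-29
```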
Finally we verify the third claim. Since [math]\displaystyle{ \zeta(s) = \lim_{N \to \infty} \sum_{n=1}^N \frac{1}{n^s} }[/math], it suffices to show that
- [math]\displaystyle{ \lim_{N \to \infty} \int_{C_0} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw = 0. }[/math]
We take [math]\displaystyle{ C_0 }[/math] to be the boundary of a [math]\displaystyle{ 1/N }[/math]-neighbourhood of the positive real axis, traversed anticlockwise. Writing [math]\displaystyle{ C_0 = \frac{1}{N} C'_0 }[/math], with [math]\displaystyle{ C'_0 }[/math] independent of [math]\displaystyle{ N }[/math], we can thus write the left-hand side as
- [math]\displaystyle{ \lim_{N \to \infty} N^{-s} \int_{C'_0} \frac{w^{s-1} e^{-w}}{e^{w/N}-1}\ dw, }[/math]
and the claim follows from the dominated convergence theorem: on [math]\displaystyle{ C'_0 }[/math] one has [math]\displaystyle{ N^{-1} (e^{w/N}-1)^{-1} \to 1/w }[/math] with the uniform bound [math]\displaystyle{ O(1/|w|) }[/math], so the integral is [math]\displaystyle{ O(N) }[/math] and the whole expression is [math]\displaystyle{ O(N^{1-\mathrm{Re}(s)}) \to 0 }[/math]. [math]\displaystyle{ \Box }[/math]
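Finally, here is an end-to-end numerical check of Proposition 1. It is a sketch only: mpmath is assumed, and [math]\displaystyle{ C_M }[/math] is realised concretely (one admissible choice among many) as a keyhole that hugs the positive real axis at distance [math]\displaystyle{ \delta }[/math] out to [math]\displaystyle{ X }[/math] and joins a circle of radius [math]\displaystyle{ (2M+1)\pi }[/math], traversed anticlockwise; truncating the two rays at [math]\displaystyle{ X }[/math] is harmless because of the [math]\displaystyle{ e^{-Nw} }[/math] factor.

```python
# Numerical verification of the Riemann-Siegel formula for small N, M.
from mpmath import mp, mpc, mpf, exp, log, arg, gamma, zeta, pi, power, quad, sqrt

mp.dps = 30
s = mpc(0.6, 1.3)            # any non-integer s
N, M = 1, 1
X = mpf(40)                  # ray truncation; e^{-N*X} makes the discarded tail negligible
delta = mpf('1e-3')          # distance of the rays from the branch cut [0, infinity)
R = (2 * M + 1) * pi         # circle radius, between 2*pi*M and 2*pi*(M+1)

def wpow(w):                 # w^(s-1) with the branch arg(w) in [0, 2*pi)
    a = arg(w)
    if a < 0:
        a += 2 * pi
    return exp((s - 1) * (log(abs(w)) + 1j * a))

def f(w):                    # the integrand of the contour integral
    return wpow(w) * exp(-N * w) / (exp(w) - 1)

x0 = sqrt(R**2 - delta**2)
p0 = arg(mpc(x0, delta))
I1 = -quad(lambda u: f(mpc(u, delta)), [x0, X])          # inward, just above the cut
I2 = quad(lambda p: f(R * exp(1j * p)) * 1j * R * exp(1j * p),
          [p0, 2 * pi - p0])                             # circle, anticlockwise
I3 = quad(lambda u: f(mpc(u, -delta)), [x0, X])          # outward, just below the cut
contour = I1 + I2 + I3

rhs = sum(power(n, -s) for n in range(1, N + 1)) \
    + power(pi, s - 0.5) * gamma((1 - s) / 2) / gamma(s / 2) \
      * sum(power(m, s - 1) for m in range(1, M + 1)) \
    + exp(-1j * pi * s) * gamma(1 - s) / (2j * pi) * contour
print(abs(zeta(s) - rhs))    # small; limited by the truncation at X and quadrature error
```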