Estimating a sum
For any <math>\sigma, t > 0</math> and natural number <math>N</math>, introduce the sum
:<math>S_{\sigma,t}(N) := \sum_{n=1}^N \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}}.</math>
This sum appears a number of times in the Polymath15 project, and it is therefore of interest to estimate it.
'''Lemma 1''' Let <math>\sigma,t > 0</math> and <math>N \geq N_0 \geq 1</math>. Then
:<math>\sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} \leq S_{\sigma,t}(N) \leq \sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} + \max( N_0^{1-\sigma - \frac{t}{4} \log \frac{N^2}{N_0}}, N^{1-\sigma - \frac{t}{4} \log N} ) \log \frac{N}{N_0}.</math>
'''Proof''' The left-hand inequality is obvious. To prove the right-hand inequality, observe (from writing the summand as <math>\frac{\exp( \frac{t}{4} (\log N - \log n)^2 )}{n^\sigma \exp(\frac{t}{4} \log^2 N)}</math>) that the summand is decreasing for <math>1 \leq n \leq N</math>, hence by the integral test one has
:<math>S_{\sigma,t}(N) \leq \sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} + \int_{N_0}^N \frac{1}{a^{\sigma + \frac{t}{4} \log \frac{N^2}{a}}}\ da.</math>
Making the change of variables [math]\displaystyle{ a = e^u }[/math], the right-hand side becomes
- [math]\displaystyle{ \sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} + \int_{\log N_0}^{\log N} \exp( (1-\sigma) u + \frac{t}{4} (u^2 - 2u \log N) )\ du. }[/math]
The expression [math]\displaystyle{ (1-\sigma) u + \frac{t}{4} (u^2 - 2u \log N) }[/math] is convex in [math]\displaystyle{ u }[/math], and is thus bounded by the maximum of its values at the endpoints [math]\displaystyle{ u = \log N_0, \log N }[/math]; thus
- [math]\displaystyle{ \exp( (1-\sigma) u + \frac{t}{4} (u^2 - 2u \log N) ) \leq \max( N_0^{1-\sigma - \frac{t}{4} \log \frac{N^2}{N_0}}, N^{1-\sigma - \frac{t}{4} \log N} ). }[/math]
The claim follows. [math]\displaystyle{ \Box }[/math]
Thus for instance if <math>\sigma = 0.7</math>, <math>t = 0.4</math>, and <math>N \geq N_0 = 2000</math>, one has
:<math>S_{0.7, 0.4}(N) \leq \sum_{n=1}^{2000} \frac{1}{n^{0.7 + 0.1 \log \frac{N^2}{n}}} + \max( 2000^{0.3 - 0.1 \log \frac{N^2}{2000}}, N^{0.3 - 0.1 \log N}) \log \frac{N}{2000}.</math>
One can compute numerically that the second term on the right-hand side is at most 0.0087, thus
:<math>S_{0.7, 0.4}(N) \leq \sum_{n=1}^{2000} \frac{1}{n^{0.7 + 0.1 \log \frac{N^2}{n}}} + 0.0087</math>
for all <math>N \geq 2000</math>. In particular
:<math>S_{0.7, 0.4}(N) \leq S_{0.7, 0.4}(2000) + 0.0087 \leq 1.706.</math>
Similarly one has
:<math>S_{0.3, 0.4}(N) \leq \sum_{n=1}^{2000} \frac{1}{n^{0.3 + 0.1 \log \frac{N^2}{n}}} + \max( 2000^{0.7 - 0.1 \log \frac{N^2}{2000}}, N^{0.7 - 0.1 \log N}) \log \frac{N}{2000}.</math>
The second term can be shown to be at most <math>0.253</math>, thus
:<math>S_{0.3, 0.4}(N) \leq S_{0.3, 0.4}(2000) + 0.253 \leq 3.469</math>
for all <math>N \geq 2000</math>.
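For concreteness, here is a minimal Python sketch of this estimate (standard library only; the function names and the sample choice <math>N = 10^6</math> are illustrative, not part of any project code):
<pre>
import math

def S_partial(sigma, t, N, N0):
    """sum_{n=1}^{N0} n^{-(sigma + (t/4) log(N^2/n))}"""
    return sum(n ** -(sigma + (t / 4) * math.log(N * N / n))
               for n in range(1, N0 + 1))

def tail_bound(sigma, t, N, N0):
    """Tail term from Lemma 1: max of the endpoint values times log(N/N0)."""
    a = N0 ** (1 - sigma - (t / 4) * math.log(N * N / N0))
    b = N ** (1 - sigma - (t / 4) * math.log(N))
    return max(a, b) * math.log(N / N0)

# Upper bound on S_{0.7,0.4}(N) for one sample value N >= 2000:
N, N0 = 10 ** 6, 2000
print(S_partial(0.7, 0.4, N, N0) + tail_bound(0.7, 0.4, N, N0))
</pre>
The constant <math>0.0087</math> quoted above is a single uniform bound on the tail term valid simultaneously for all <math>N \geq 2000</math>.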
=== Evaluating many sums ===
Let <math>N</math> be a natural number. For any complex numbers <math>z,w</math>, we define the quantity
:<math>F_N(z,w) := \sum_{n=1}^N n^{-z + w \log n}.</math>
In particular, the function <math>f_t(x+iy)</math> in the writeup has the form
:<math>f_t(x+iy) = F_N( \frac{1+y-ix}{2} + \frac{t}{2} \alpha(\frac{1+y-ix}{2}), \frac{t}{4} ) + \gamma \overline{F_N( \frac{1-y-ix}{2} + \frac{t}{2} \alpha(\frac{1-y-ix}{2}), \frac{t}{4} )}.</math>
It is thus of interest to efficiently evaluate <math>F_N(z,w)</math> for multiple different values of <math>z,w</math>. We will be particularly interested in the regime where <math>w=O(1)</math> and <math>z = z_0 + \zeta</math> for some <math>\zeta = O(1)</math> and some fixed <math>z_0</math> (e.g. <math>z_0 = \frac{1-iX}{2}</math>), thus
:<math>F_N(z_0+\zeta, w) = \sum_{n=1}^N n^{-z_0} n^{-\zeta + w \log n}.</math>
If one were to naively evaluate <math>F_N(z_0+\zeta, w)</math> at <math>M</math> different values of <math>(\zeta,w)</math>, this would require about <math>O(NM)</math> operations. We now seek to use fewer operations to perform these evaluations, at the cost of some accuracy.
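For reference, a direct implementation might look as follows (a minimal sketch; <code>F_naive</code> is a hypothetical name, and each call costs <math>O(N)</math> operations, hence <math>O(NM)</math> for <math>M</math> evaluations):
<pre>
import cmath, math

def F_naive(N, z, w):
    """F_N(z, w) = sum_{n=1}^N n^{-z + w log n}, evaluated term by term."""
    total = 0j
    for n in range(1, N + 1):
        logn = math.log(n)
        total += cmath.exp((-z + w * logn) * logn)  # n^{-z + w log n}
    return total
</pre>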
We can partition the interval <math>\{1,\dots,N\}</math> into an initial segment <math>\{1,\dots,N_0\}</math> and about <math>O(N/H)</math> segments of the form <math>\{ N_i - H/2, \dots, N_i + H/2\}</math> for various <math>N_i</math>. This lets us split
:<math>F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( - \zeta \log(N_i+h) + w \log^2(N_i+h) ).</math>
Writing <math>\log(N_i + h) = \log(N_i) + \varepsilon_{i,h}</math>, where <math>\varepsilon_{i,h} := \log(1 + \frac{h}{N_i})</math>, we thus have
:<math>F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( A_i(\zeta,w) + B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2 )</math>
where
:<math> A_i(\zeta,w) := - \zeta \log(N_i) + w \log^2(N_i)</math>
and
:<math> B_i(\zeta,w) := - \zeta + 2 w \log N_i.</math>
From Taylor's theorem with remainder, we have
:<math> \exp( a ) = \sum_{j=0}^T \frac{a^j}{j!} + O_{\leq}( \frac{|a|^{T+1}}{(T+1)!} \exp(|a|) )</math>
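This remainder bound is easy to test numerically; a small sketch (the test value of <math>a</math> and the truncation <math>T</math> are arbitrary):
<pre>
import cmath, math

def taylor_exp(a, T):
    """Partial sum sum_{j=0}^T a^j / j!."""
    term, total = 1 + 0j, 0j
    for j in range(T + 1):
        total += term
        term *= a / (j + 1)
    return total

a, T = 0.3 + 0.2j, 5
remainder = abs(cmath.exp(a) - taylor_exp(a, T))
bound = abs(a) ** (T + 1) / math.factorial(T + 1) * math.exp(abs(a))
assert remainder <= bound
</pre>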
and hence
:<math>F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( A_i(\zeta,w) ) \sum_{j=0}^T \frac{1}{j!} (B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2)^j + O_{\leq}( E )</math>
where <math>E</math> is the error term
:<math> E := \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i+h)^{-\mathrm{Re}(z_0)} \exp( \mathrm{Re} A_i(\zeta,w) ) \frac{|B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2|^{T+1}}{(T+1)!} \exp( |B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2| ).</math>
By binomial expansion, we then have
:<math>F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( A_i(\zeta,w) ) \sum_{j_1,j_2 \geq 0: j_1+j_2 \leq T} \frac{1}{j_1! j_2!} (B_i(\zeta,w) \varepsilon_{i,h})^{j_1} (w \varepsilon_{i,h}^2)^{j_2} + O_{\leq}( E )</math>
which we can rearrange as
:<math>F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \exp( A_i(\zeta,w) ) \sum_{j=0}^{2T} \beta_{i,j} \sigma_{i,j}(\zeta,w) + O_{\leq}( E )</math>
where
:<math> \beta_{i,j} := \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \varepsilon_{i,h}^j</math>
and
:<math>\sigma_{i,j}(\zeta,w) := \sum_{j_1,j_2 \geq 0:\ j_1+2j_2 = j,\ j_1+j_2 \leq T} \frac{1}{j_1! j_2!} B_i(\zeta,w)^{j_1} w^{j_2}.</math>
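The following Python sketch illustrates the resulting algorithm (illustrative only: it uses disjoint blocks of <math>H</math> consecutive integers with midpoint <math>N_i</math> rather than the windows <math>\{N_i - H/2,\dots,N_i+H/2\}</math> above, assumes <math>H</math> divides <math>N - N_0</math>, and the function names are hypothetical):
<pre>
import cmath, math

def precompute_beta(N, N0, H, T, z0):
    """beta[i][j] = sum over block i of n^{-z0} * eps^j for j = 0..2T,
    where eps = log(n / N_i) and N_i is the block midpoint."""
    assert (N - N0) % H == 0 and H % 2 == 0
    centers, betas = [], []
    for start in range(N0, N, H):          # block covers start+1 .. start+H
        Ni = start + H // 2
        beta = [0j] * (2 * T + 1)
        for n in range(start + 1, start + H + 1):
            eps = math.log(n / Ni)                  # eps_{i,h}, h = n - N_i
            weight = cmath.exp(-z0 * math.log(n))   # n^{-z0}
            p = 1.0
            for j in range(2 * T + 1):
                beta[j] += weight * p
                p *= eps
        centers.append(Ni)
        betas.append(beta)
    return centers, betas

def F_fast(N0, centers, betas, T, z0, zeta, w):
    """Approximates F_N(z0 + zeta, w) using the precomputed beta's."""
    total = sum(cmath.exp((-(z0 + zeta) + w * math.log(n)) * math.log(n))
                for n in range(1, N0 + 1))          # initial segment, exact
    for Ni, beta in zip(centers, betas):
        L = math.log(Ni)
        A = -zeta * L + w * L * L                   # A_i(zeta, w)
        B = -zeta + 2 * w * L                       # B_i(zeta, w)
        sigma = [0j] * (2 * T + 1)                  # sigma_{i,j}(zeta, w)
        for j1 in range(T + 1):
            for j2 in range(T + 1 - j1):
                sigma[j1 + 2 * j2] += (B ** j1 * w ** j2
                    / (math.factorial(j1) * math.factorial(j2)))
        total += cmath.exp(A) * sum(b * s for b, s in zip(beta, sigma))
    return total
</pre>
Comparing <code>F_fast</code> against the direct evaluation above on moderate parameters gives an empirical check on the error term <math>E</math>.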
The point is that it only requires <math>O( \frac{N}{H} T H )</math> calculations to compute the <math>\beta_{i,j}</math>, and <math>O( \frac{N}{H} T M )</math> calculations to compute the <math>\sigma_{i,j}(\zeta,w)</math>, so the total computation time is now
:<math>O( N_0 M + \frac{N}{H} T H + \frac{N}{H} T M + \frac{N}{H} T M ) = O( NM ( \frac{N_0}{N} + \frac{T}{M} + \frac{T}{H} ) )</math>
which can be a significant speedup over <math>O(NM)</math> when <math>N_0 \ll N</math>, <math>T \ll M</math>, and <math>T \ll H</math>.
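For instance (with purely illustrative parameter choices), taking <math>N = 10^6</math>, <math>N_0 = 10^3</math>, <math>H = 10^3</math>, <math>T = 10</math>, and <math>M = 10^4</math> gives
:<math>\frac{N_0}{N} + \frac{T}{M} + \frac{T}{H} = 10^{-3} + 10^{-3} + 10^{-2} = 0.012,</math>
i.e. roughly a hundredfold reduction in the number of operations.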
Now we control the error term <math>E</math>. From the concavity of the logarithm we have
:<math>|\varepsilon_{i,h}| \leq |\varepsilon_{i,-H/2}| = \log \frac{N_i}{N_i-H/2} \leq \log(1 + \frac{H}{2N_0})</math>
and
:<math> |B_i(\zeta,w)| \leq |\zeta| + 2 |w| \log N</math>
and hence
:<math> E \leq \frac{\delta^{T+1}}{(T+1)!} \exp( \delta ) \sum_i \exp( \mathrm{Re} A_i(\zeta,w) ) \sum_{-H/2 \leq h \leq H/2} (N_i+h)^{-\mathrm{Re} z_0}</math>
where
:<math> \delta := (|\zeta| + 2 |w| \log N) \log(1+\frac{H}{2N_0}) + |w| \log^2(1+\frac{H}{2N_0}).</math>
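A short sketch showing how small the prefactor <math>\frac{\delta^{T+1}}{(T+1)!} \exp(\delta)</math> can be (the parameter values are illustrative only):
<pre>
import math

abs_zeta, abs_w = 1.0, 0.1           # |zeta| and |w| (illustrative)
N, N0, H, T = 10 ** 6, 10 ** 5, 10 ** 3, 5
r = math.log(1 + H / (2 * N0))
delta = (abs_zeta + 2 * abs_w * math.log(N)) * r + abs_w * r * r
prefactor = delta ** (T + 1) / math.factorial(T + 1) * math.exp(delta)
print(delta, prefactor)              # prefactor multiplying the sums in E
</pre>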
== Euler2 mollifier in the toy model ==
The toy version of preventing <math>H_t(x+iy)</math> from vanishing is that of establishing the inequality
:<math> |\sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}| > N^{-y} |\sum_{n=1}^N \frac{1}{n^{\frac{1-y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}|.</math>
Direct application of the triangle inequality (using the fact that the <math>n=1</math> term of the left-hand sum equals <math>1</math>) lets us do this if <math>S < 2</math>, where <math>S</math> is the sum
:<math> S := \sum_{n=1}^N \left( \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} + \frac{N^{-y}}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} \right).</math>
One can estimate <math>S</math> using the estimates in the first section of this page.
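For example, with the illustrative choices <math>y = 0.4</math> and <math>t = 0.4</math>, the two exponents become <math>\frac{1+y}{2} = 0.7</math> and <math>\frac{1-y}{2} = 0.3</math>, so the bounds from the first section give, for all <math>N \geq 2000</math>,
:<math> S = S_{0.7,0.4}(N) + N^{-0.4} S_{0.3,0.4}(N) \leq 1.706 + 2000^{-0.4} \times 3.469 < 1.88 < 2.</math>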
Now we use an Euler2 mollifier. The lemma here is
'''Lemma''' Let <math>a_1,\dots,a_N</math> be complex numbers, and let <math>b_2</math> be a number such that whenever <math>1 \leq n \leq N</math> is even, <math>b_2 a_{n/2}</math> lies on the line segment <math>\{ \theta a_n: 0 \leq \theta \leq 1\}</math> connecting <math>0</math> with <math>a_n</math>. Then we have the lower bound
:<math> |1-b_2| |\sum_{n=1}^N a_n| \geq 2 |a_1| - (1-|b_2|) \sum_{n=1}^N |a_n| - 2 |b_2| \sum_{N/2 < n \leq N} |a_n|</math>
and the upper bound
:<math> |1-b_2| |\sum_{n=1}^N a_n| \leq (1-|b_2|) \sum_{n=1}^N |a_n| + 2 |b_2| \sum_{N/2 < n \leq N} |a_n|.</math>
'''Proof''' The left-hand side can be written as
:<math> |\sum_{n=1}^{2N} (1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2)|.</math>
By the triangle inequality, this is bounded above by
:<math> \sum_{n=1}^{2N} |1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2|</math>
and below by
:<math> 2 |a_1| - \sum_{n=1}^{2N} |1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2|.</math>
Using the segment hypothesis to evaluate <math>|a_n - a_{n/2} b_2|</math> exactly for even <math>n \leq N</math>, we have
:<math> \sum_{n=1}^{2N} |1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2| = \sum_{n=1}^N \left( |a_n| - 1_{2|n} |a_{n/2}| |b_2| \right) + \sum_{n=N+1}^{2N} 1_{2|n} |a_{n/2}| |b_2|</math>
which we can rearrange as
:<math> (1 - |b_2|) \sum_{n=1}^N |a_n| + 2 |b_2| \sum_{N/2 < n \leq N} |a_n|</math>
and the claim follows. <math>\Box</math>
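The bounds are easy to sanity-check numerically on test data of the same multiplicative shape as in the application below (a sketch; the values of <math>c, d, N</math> are arbitrary):
<pre>
import cmath, math

# a_n = n^{-s_n} with s_n = c + d*log(N^2/n); for this family b_2 * a_{n/2}
# is a positive multiple (at most 1) of a_n for every even n, as required.
N = 40
c, d = complex(0.35, -1.7), 0.1                 # illustrative values
s = lambda n: c + d * math.log(N * N / n)
a = [cmath.exp(-s(n) * math.log(n)) for n in range(1, N + 1)]
b2 = cmath.exp(-s(2) * math.log(2))

lhs = abs(1 - b2) * abs(sum(a))
full = sum(abs(x) for x in a)
tail = sum(abs(x) for x in a[N // 2:])          # N/2 < n <= N
assert 2 * abs(a[0]) - (1 - abs(b2)) * full - 2 * abs(b2) * tail <= lhs
assert lhs <= (1 - abs(b2)) * full + 2 * abs(b2) * tail
</pre>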
We apply this lemma with <math>a_n = \frac{1}{n^{\frac{1 \pm y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}</math> and <math>b_2 = \frac{1}{2^{\frac{1+y-ix}{2}+\frac{t}{4} \log \frac{N^2}{2} - \frac{\pi i t}{8}}}</math>. The hypotheses of the lemma are easily verified, and we conclude that
:<math> |1-b_2| |\sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}| \geq 2 - (1-|b_2|) \sum_{n=1}^N \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} - 2 |b_2| \sum_{N/2 < n \leq N} \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}}</math>
and
:<math> |1-b_2| |\sum_{n=1}^N \frac{1}{n^{\frac{1-y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}| \leq (1-|b_2|) \sum_{n=1}^N \frac{1}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} + 2 |b_2| \sum_{N/2 < n \leq N} \frac{1}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}}.</math>
This leads to a modified criterion
:<math> (1-|b_2|) S + 2 |b_2| \sum_{N/2 < n \leq N} \left( \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} + \frac{N^{-y}}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} \right) < 2.</math>
We gain a factor of <math>(1 - |b_2|)</math> on the main term, but pick up an additional tail term summing over <math>N/2 < n \leq N</math>. This tail term should be small for <math>N</math> large. The summand is monotone decreasing in <math>n</math>, so we can majorise the tail sum by <math>N/2</math> times the value of the summand at <math>n = N/2</math>; thus we obtain the simplified criterion
:<math> (1 - |b_2|) S + N |b_2| \frac{1 + 2^{-y}}{(N/2)^{\frac{1+y}{2}+\frac{t}{4} \log(2N)}} < 2.</math>
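As a sanity check, one can evaluate this criterion directly (a sketch with illustrative parameter choices; not part of any project code):
<pre>
import math

y, t, N = 0.4, 0.4, 2000                    # illustrative parameter choices
b2_abs = 2 ** -((1 + y) / 2 + (t / 4) * math.log(N * N / 2))
S = sum(n ** -((1 + y) / 2 + (t / 4) * math.log(N * N / n))
        + N ** -y * n ** -((1 - y) / 2 + (t / 4) * math.log(N * N / n))
        for n in range(1, N + 1))
tail = (N * b2_abs * (1 + 2 ** -y)
        / (N / 2) ** ((1 + y) / 2 + (t / 4) * math.log(2 * N)))
print((1 - b2_abs) * S + tail)              # criterion holds if this is < 2
</pre>
[[Category:Polymath15]]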