# Estimating a sum


For any $\sigma, t\gt0$ and natural number $N$, introduce the sum

$S_{\sigma,t}(N) := \sum_{n=1}^N \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}}.$

This sum appears a number of times in the Polymath15 project, and it is therefore of interest to estimate it.

Lemma 1 Let $\sigma,t\gt0$ and $N \geq N_0 \geq 1$. Then

$\sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} \leq S_{\sigma,t}(N) \leq \sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} + \max( N_0^{1-\sigma - \frac{t}{4} \log \frac{N^2}{N_0}}, N^{1-\sigma - \frac{t}{4} \log N} ) \log \frac{N}{N_0}.$

Proof The left-hand inequality is obvious. To prove the right-hand inequality, observe (from writing the summand as $\frac{\exp( \frac{t}{4} (\log N - \log n)^2)}{n^\sigma \exp(\frac{t}{4} \log^2 N)}$) that the summand is decreasing for $1 \leq n \leq N$, hence by the integral test one has

$S_{\sigma,t}(N) \leq \sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} + \int_{N_0}^N \frac{1}{a^{\sigma + \frac{t}{4} \log \frac{N^2}{a}}}\ da.$

Making the change of variables $a = e^u$, the right-hand side becomes

$\sum_{n=1}^{N_0} \frac{1}{n^{\sigma + \frac{t}{4} \log\frac{N^2}{n}}} + \int_{\log N_0}^{\log N} \exp( (1-\sigma) u + \frac{t}{4} (u^2 - 2u \log N) )\ du.$

The expression $(1-\sigma) u + \frac{t}{4} (u^2 - 2u \log N)$ is convex in $u$, and is thus bounded by the maximum of its values at the endpoints $u = \log N_0, \log N$; thus

$\exp( (1-\sigma) u + \frac{t}{4} (u^2 - 2u \log N) ) \leq \max( N_0^{1-\sigma - \frac{t}{4} \log \frac{N^2}{N_0}}, N^{1-\sigma - \frac{t}{4} \log N} ).$

The claim follows. $\Box$
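For concreteness, the sandwich in Lemma 1 is easy to check numerically. The following is a minimal sketch; the function names and all parameter values used with it are illustrative choices, not part of the argument above.

```python
import math

def S(sigma, t, N):
    # S_{sigma,t}(N) = sum_{n=1}^N n^{-(sigma + (t/4) log(N^2/n))}, evaluated directly
    return sum(n ** -(sigma + (t / 4) * math.log(N * N / n)) for n in range(1, N + 1))

def lemma1_bounds(sigma, t, N, N0):
    # The two sides of the sandwich in Lemma 1: the partial sum up to N0,
    # and the partial sum plus the endpoint-maximum term times log(N/N0).
    partial = sum(n ** -(sigma + (t / 4) * math.log(N * N / n)) for n in range(1, N0 + 1))
    endpoint = max(N0 ** (1 - sigma - (t / 4) * math.log(N * N / N0)),
                   N ** (1 - sigma - (t / 4) * math.log(N)))
    return partial, partial + endpoint * math.log(N / N0)
```

For example, `lemma1_bounds(0.7, 0.4, 5000, 100)` returns a pair sandwiching `S(0.7, 0.4, 5000)`.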

Thus for instance if $\sigma = 0.7$, $t = 0.4$, and $N \geq N_0 = 2000$, one has

$S_{0.7, 0.4}(N) \leq \sum_{n=1}^{2000} \frac{1}{n^{0.7 + 0.1 \log \frac{N^2}{n}}} + \max( 2000^{0.3 - 0.1 \log \frac{N^2}{2000}}, N^{0.3 - 0.1 \log N}) \log \frac{N}{2000}.$

One can compute numerically that the second term on the RHS is at most 0.0087, thus

$S_{0.7, 0.4}(N) \leq \sum_{n=1}^{2000} \frac{1}{n^{0.7 + 0.1 \log \frac{N^2}{n}}} + 0.0087$

for all $N \geq 2000$. In particular

$S_{0.7, 0.4}(N) \leq S_{0.7, 0.4}(2000) + 0.0087 \leq 1.706.$

Similarly one has

$S_{0.3, 0.4}(N) \leq \sum_{n=1}^{2000} \frac{1}{n^{0.3 + 0.1 \log \frac{N^2}{n}}} + \max( 2000^{0.7 - 0.1 \log \frac{N^2}{2000}}, N^{0.7 - 0.1 \log N}) \log \frac{N}{2000}.$

The second term can be shown to be at most $0.253$, thus

$S_{0.3, 0.4}(N) \leq S_{0.3, 0.4}(2000) + 0.253 \leq 3.469$

for all $N \geq 2000$.
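The numerical constants quoted above can be reproduced directly. The sketch below evaluates the tail term of Lemma 1 on a geometric grid of $N \geq 2000$ (the grid spacing and cut-off are ad hoc choices; the tail term tends to zero as $N \to \infty$, so a finite grid suffices for a sanity check):

```python
import math

def tail_term(sigma, t, N, N0=2000):
    # max( N0^{1-sigma-(t/4)log(N^2/N0)}, N^{1-sigma-(t/4)log N} ) * log(N/N0)
    e1 = N0 ** (1 - sigma - (t / 4) * math.log(N * N / N0))
    e2 = N ** (1 - sigma - (t / 4) * math.log(N))
    return max(e1, e2) * math.log(N / N0)

def S(sigma, t, N):
    # S_{sigma,t}(N), evaluated directly
    return sum(n ** -(sigma + (t / 4) * math.log(N * N / n)) for n in range(1, N + 1))

# scan N >= 2000 on a geometric grid (1% spacing, up to ~40000)
Ns = [round(2000 * 1.01 ** k) for k in range(300)]
m07 = max(tail_term(0.7, 0.4, N) for N in Ns)
m03 = max(tail_term(0.3, 0.4, N) for N in Ns)
```

The grid maxima come out just below the constants $0.0087$ and $0.253$ quoted above, consistent with the stated bounds.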

## Evaluating many sums

Let $N$ be a natural number. For any complex numbers $z,w$, we define the quantity

$F_N(z,w) := \sum_{n=1}^N n^{-z + w \log n}.$

In particular, the function $f_t(x+iy)$ in the writeup has the form

$f_t(x+iy) = F_N( \frac{1+y-ix}{2} + \frac{t}{2} \alpha(\frac{1+y-ix}{2}), \frac{t}{4} ) + \gamma \overline{F_N( \frac{1-y-ix}{2} + \frac{t}{2} \alpha(\frac{1-y-ix}{2}), \frac{t}{4} )}.$

It is thus of interest to efficiently evaluate $F_N(z,w)$ for multiple different values of $z,w$. We will be particularly interested in the regime where $w=O(1)$ and $z = z_0 + \zeta$ for some $\zeta = O(1)$ and some fixed $z_0$ (e.g. $z_0 = \frac{1-iX}{2}$); thus

$F_N(z_0+\zeta, w) = \sum_{n=1}^N n^{-z_0} n^{-\zeta + w \log n}.$

If one were to naively evaluate $F_N(z_0+\zeta, w)$ at $M$ different values of $(\zeta,w)$, this would require $O(NM)$ operations. We now seek to use fewer operations to perform these evaluations, at the cost of some accuracy.

We can partition $\{1,\dots,N\}$ into an initial segment $\{1,\dots,N_0\}$ and $O(N/H)$ segments of the form $\{ N_i - H/2, \dots, N_i + H/2\}$ for various $N_i$. This lets us split

$F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( - \zeta \log(N_i+h) + w \log^2(N_i+h) ).$

Writing $\log(N_i + h) = \log(N_i) + \varepsilon_{i,h}$, where $\varepsilon_{i,h} := \log(1 + \frac{h}{N_i})$, we thus have

$F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( A_i(\zeta,w) + B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2 )$

where

$A_i(\zeta,w) := - \zeta \log(N_i) + w \log^2(N_i)$

and

$B_i(\zeta,w) := - \zeta + 2 w \log N_i.$

From Taylor's theorem with remainder, we have

$\exp( a ) = \sum_{j=0}^T \frac{a^j}{j!} + O_{\leq}( \frac{|a|^{T+1}}{(T+1)!} \exp(|a|) )$

and hence

$F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( A_i(\zeta,w) ) \sum_{j=0}^T \frac{1}{j!} (B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2)^j + O_{\leq}( E )$

where $E$ is the error term

$\sum_i \sum_{-H/2 \leq h \leq H/2} (N_i+h)^{-\mathrm{Re}(z_0)} \exp( \mathrm{Re} A_i(\zeta,w) ) \frac{|B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2|^{T+1}}{(T+1)!} \exp( |B_i(\zeta,w) \varepsilon_{i,h} + w \varepsilon_{i,h}^2| ).$

By binomial expansion, we then have

$F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \exp( A_i(\zeta,w) ) \sum_{j_1,j_2 \geq 0: j_1+j_2 \leq T} \frac{1}{j_1! j_2!} (B_i(\zeta,w) \varepsilon_{i,h})^{j_1} (w \varepsilon_{i,h}^2)^{j_2} + O_{\leq}( E )$

which we can rearrange as

$F_N(z_0+\zeta, w) = F_{N_0}(z_0+\zeta, w) + \sum_i \exp( A_i(\zeta,w) ) \sum_{j=0}^{2T} \beta_{i,j} \sigma_{i,j}(\zeta,w) + O_{\leq}( E )$

where

$\beta_{i,j} := \sum_{-H/2 \leq h \leq H/2} (N_i + h)^{-z_0} \varepsilon_{i,h}^j$

and

$\sigma_{i,j}(\zeta,w) := \sum_{j_1,j_2 \geq 0: j_1+2j_2 = j; j_1+j_2 \leq T} \frac{1}{j_1! j_2!} B_i(\zeta,w)^{j_1} w^{j_2}.$

The point is that it only requires $O( \frac{N}{H} T H )$ calculations to compute the $\beta_{i,j}$, and $O( \frac{N}{H} T M )$ calculations to compute the $\sigma_{i,j}(\zeta,w)$, so the total computation time is now

$O( N_0 M + \frac{N}{H} T H + \frac{N}{H} T M + \frac{N}{H} T M ) = O( NM ( \frac{N_0}{N} + \frac{T}{M} + \frac{T}{H} ) )$

which can be a significant speedup over $O(NM)$ when $N_0 \ll N$, $T \ll M$, and $T \ll H$.

Now we control the error term $E$. From the concavity of the logarithm we have

$|\varepsilon_{i,h}| \leq |\varepsilon_{i,-H/2}| = \log \frac{N_i}{N_i-H/2} \leq \log(1 + \frac{H}{2N_0})$

and

$|B_i(\zeta,w)| \leq |\zeta| + 2 |w| \log N$

and hence

$E \leq \frac{\delta^{T+1}}{(T+1)!} \exp( \delta) \sum_i \exp( \mathrm{Re} A_i(\zeta,w) ) \sum_{-H/2 \leq h \leq H/2} (N_i+h)^{-\mathrm{Re} z_0}$

where

$\delta := (|\zeta| + 2 |w| \log N) \log(1+\frac{H}{2N_0}) + |w| \log^2(1+\frac{H}{2N_0}).$
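The scheme above can be sketched in code as follows. This is a minimal illustration, not the project's implementation: the choice of $N_i$ as block midpoints, the disjoint (rather than symmetric $\pm H/2$) blocks, and the specific parameters are all assumptions made for the sketch.

```python
import cmath
import math

def F_direct(N, z, w):
    # F_N(z, w) = sum_{n=1}^N n^{-z + w log n}, evaluated term by term
    return sum(cmath.exp(math.log(n) * (-z + w * math.log(n))) for n in range(1, N + 1))

def F_fast(N, z0, zeta, w, N0, H, T):
    # Approximate F_N(z0 + zeta, w): the initial segment {1,...,N0} directly,
    # then blocks of length H centred at N_i, Taylor-expanded to order T.
    total = F_direct(N0, z0 + zeta, w)
    n = N0 + 1
    while n <= N:
        top = min(n + H - 1, N)
        Ni = (n + top) // 2                      # block centre N_i
        logNi = math.log(Ni)
        A = -zeta * logNi + w * logNi ** 2       # A_i(zeta, w)
        B = -zeta + 2 * w * logNi                # B_i(zeta, w)
        # beta_{i,j}: depends only on z0, so in practice it would be cached
        # and reused across many different (zeta, w)
        beta = [0j] * (2 * T + 1)
        for m in range(n, top + 1):
            eps = math.log(m / Ni)               # eps_{i,h} = log(1 + h/N_i)
            base = cmath.exp(-z0 * math.log(m))  # (N_i + h)^{-z0}
            p = 1.0
            for j in range(2 * T + 1):
                beta[j] += base * p
                p *= eps
        # sigma_{i,j}(zeta, w), then this block's contribution
        for j in range(2 * T + 1):
            sig = 0j
            for j2 in range(j // 2 + 1):
                j1 = j - 2 * j2
                if j1 + j2 <= T:
                    sig += B ** j1 * w ** j2 / (math.factorial(j1) * math.factorial(j2))
            total += cmath.exp(A) * beta[j] * sig
        n = top + 1
    return total
```

Here the $\beta_{i,j}$ are recomputed on every call for simplicity; to realize the $O( NM ( \frac{N_0}{N} + \frac{T}{M} + \frac{T}{H} ) )$ cost one would precompute them once (they are independent of $\zeta,w$) and reuse them across all $M$ evaluations.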

## Euler2 mollifier in the toy model

The toy version of preventing $H_t(x+iy)$ from vanishing is that of establishing the inequality

$|\sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}| \gt N^{-y} |\sum_{n=1}^N \frac{1}{n^{\frac{1-y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}|.$

Direct application of the triangle inequality lets us do this if $S \lt 2$, where $S$ is the sum

$S := \sum_{n=1}^N \left( \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} + \frac{N^{-y}}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} \right).$

One can estimate $S$ using Lemma 1 from the first section of this page.

Now we use an Euler2 mollifier. The lemma here is

Lemma Let $a_1,\dots,a_N$ be complex numbers, and let $b_2$ be a number such that whenever $1 \leq n \leq N$ is even, $b_2 a_{n/2}$ lies on the line segment $\{ \theta a_n: 0 \leq \theta \leq 1\}$ connecting 0 with $a_n$. Then we have the lower bound

$|1-b_2| |\sum_{n=1}^N a_n| \geq 2 |a_1| - (1-|b_2|) \sum_{n=1}^N |a_n| - 2 |b_2| \sum_{N/2 \lt n \leq N} |a_n|$

and the upper bound

$|1-b_2| |\sum_{n=1}^N a_n| \leq (1-|b_2|) \sum_{n=1}^N |a_n| + 2 |b_2| \sum_{N/2 \lt n \leq N} |a_n|.$

Proof The left-hand side can be written as

$|\sum_{n=1}^{2N} (1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2)|.$

By the triangle inequality, this is bounded above by

$\sum_{n=1}^{2N} |1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2|$

and below by

$2 |a_1| - \sum_{n=1}^{2N} |1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2|.$

We have

$\sum_{n=1}^{2N} |1_{n \leq N} a_n - 1_{2|n} a_{n/2} b_2| = \sum_{n=1}^N \left( |a_n| - 1_{2|n} |a_{n/2}| |b_2| \right) + \sum_{n=N+1}^{2N} 1_{2|n} |a_{n/2}| |b_2|$

which we can rearrange as

$(1 - |b_2|) \sum_{n=1}^N |a_n| + 2 |b_2| \sum_{N/2 \lt n \leq N} |a_n|$

and the claim follows. $\Box$
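The two bounds of the lemma are easy to test numerically. The sequence below is an illustrative choice (not the one used in the application): $a_n = n^{-s}$ with $b_2 = 2^{-s}$, so that $b_2 a_{n/2} = a_n$ for even $n$, which lies (with $\theta = 1$) on the required segment.

```python
import cmath
import math

def mollifier_bounds(a, b2):
    # a is the list [a_1, ..., a_N]; returns (lower, middle, upper), where
    # middle = |1 - b2| |sum a_n| and lower/upper are the lemma's bounds
    N = len(a)
    middle = abs(1 - b2) * abs(sum(a))
    absum = sum(abs(x) for x in a)
    tail = sum(abs(a[n - 1]) for n in range(1, N + 1) if n > N / 2)
    lower = 2 * abs(a[0]) - (1 - abs(b2)) * absum - 2 * abs(b2) * tail
    upper = (1 - abs(b2)) * absum + 2 * abs(b2) * tail
    return lower, middle, upper

# Illustrative sequence satisfying the segment hypothesis: a_n = n^{-s},
# b_2 = 2^{-s}, so b_2 a_{n/2} = a_n for even n.
s = 2 - 3j
a = [cmath.exp(-s * math.log(n)) for n in range(1, 11)]
b2 = cmath.exp(-s * math.log(2))
```

Calling `mollifier_bounds(a, b2)` then returns a triple with the middle quantity sandwiched between the two bounds.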

We apply this lemma with $a_n = \frac{1}{n^{\frac{1 \pm y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}$ and $b_2 = \frac{1}{2^{\frac{1+y-ix}{2}+\frac{t}{4} \log \frac{N^2}{2} - \frac{\pi i t}{8}}}$. The hypotheses of the lemma are easily verified, and we conclude that

$|1-b_2| |\sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}| \geq 2 - (1-|b_2|) \sum_{n=1}^N \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} - 2 |b_2| \sum_{N/2 \lt n \leq N} \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}}$

and

$|1-b_2| |\sum_{n=1}^N \frac{1}{n^{\frac{1-y-ix}{2}+\frac{t}{4} \log \frac{N^2}{n} - \frac{\pi i t}{8}}}| \leq (1-|b_2|) \sum_{n=1}^N \frac{1}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} + 2 |b_2| \sum_{N/2 \lt n \leq N} \frac{1}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}}.$

This leads to a modified criterion

$(1-|b_2|) S + 2 |b_2| \sum_{N/2 \lt n \leq N} \left( \frac{1}{n^{\frac{1+y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} + \frac{N^{-y}}{n^{\frac{1-y}{2}+\frac{t}{4} \log \frac{N^2}{n}}} \right) \lt 2.$

We gain a factor of $(1 - |b_2|)$ on the main term, but pick up an additional tail term summing over $N/2 \lt n \leq N$. This tail term should be small for large $N$. The summand is monotone decreasing in $n$, so we can majorise the tail sum by $N/2$ times the value of the summand at $n = N/2$ (noting that $\log \frac{N^2}{N/2} = \log(2N)$), and we obtain the simplified criterion

$(1 - |b_2|) S + N |b_2| \frac{1 + 2^{-y}}{(N/2)^{\frac{1+y}{2}+\frac{t}{4} \log(2N)}} \lt 2.$
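The simplified criterion can be evaluated directly alongside the toy inequality it is meant to certify. The sketch below does this; the parameter values in the usage note ($N = 2000$, $x = 1000$, $y = t = 0.4$) are illustrative choices, not values taken from the project.

```python
import cmath
import math

def toy_criterion(N, x, y, t):
    # s_{\pm}(n): the exponents appearing in the toy inequality
    def s(n, sign):
        return ((1 + sign * y - 1j * x) / 2
                + (t / 4) * math.log(N * N / n) - math.pi * 1j * t / 8)
    # S: the triangle-inequality sum from the start of this section
    S = sum(n ** -((1 + y) / 2 + (t / 4) * math.log(N * N / n))
            + N ** -y * n ** -((1 - y) / 2 + (t / 4) * math.log(N * N / n))
            for n in range(1, N + 1))
    b2 = cmath.exp(-s(2, +1) * math.log(2))
    # the simplified mollified criterion (should be < 2)
    tail = N * abs(b2) * (1 + 2 ** -y) / (N / 2) ** ((1 + y) / 2 + (t / 4) * math.log(2 * N))
    crit = (1 - abs(b2)) * S + tail
    # the two sides of the toy inequality
    lhs = abs(sum(cmath.exp(-s(n, +1) * math.log(n)) for n in range(1, N + 1)))
    rhs = N ** -y * abs(sum(cmath.exp(-s(n, -1) * math.log(n)) for n in range(1, N + 1)))
    return crit, lhs, rhs
```

For instance, `toy_criterion(2000, 1000.0, 0.4, 0.4)` gives a criterion value below 2, and correspondingly the left-hand side of the toy inequality exceeds the right-hand side.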