Controlling A+B/B_0
Some numerical data on [math]\displaystyle{ |\frac{A+B}{B_0}| }[/math] (source) and also [math]\displaystyle{ \mathrm{Re} \frac{A+B}{B_0} }[/math] (source), using a step size of 1 in [math]\displaystyle{ x }[/math], suggests that this ratio oscillates roughly between 0.5 and 3 for medium values of [math]\displaystyle{ x }[/math]:
range of [math]\displaystyle{ x }[/math] | min value | max value | average value | standard deviation | min real part | max real part |
---|---|---|---|---|---|---|
0-1000 | 0.179 | 4.074 | 1.219 | 0.782 | -0.09 | 4.06 |
1000-2000 | 0.352 | 4.403 | 1.164 | 0.712 | 0.02 | 4.43 |
2000-3000 | 0.352 | 4.050 | 1.145 | 0.671 | 0.15 | 3.99 |
3000-4000 | 0.338 | 4.174 | 1.134 | 0.640 | 0.34 | 4.48 |
4000-5000 | 0.386 | 4.491 | 1.128 | 0.615 | 0.33 | 4.33 |
5000-6000 | 0.377 | 4.327 | 1.120 | 0.599 | 0.377 | 4.327 |
[math]\displaystyle{ 1-10^5 }[/math] | 0.179 | 4.491 | 1.077 | 0.455 | -0.09 | 4.48 |
[math]\displaystyle{ 10^5-2 \times 10^5 }[/math] | 0.488 | 3.339 | 1.053 | 0.361 | 0.48 | 3.32 |
[math]\displaystyle{ 2 \times 10^5-3 \times 10^5 }[/math] | 0.508 | 3.049 | 1.047 | 0.335 | 0.50 | 3.00 |
[math]\displaystyle{ 3 \times 10^5-4 \times 10^5 }[/math] | 0.517 | 2.989 | 1.043 | 0.321 | 0.52 | 2.97 |
[math]\displaystyle{ 4 \times 10^5-5 \times 10^5 }[/math] | 0.535 | 2.826 | 1.041 | 0.310 | 0.53 | 2.82 |
[math]\displaystyle{ 5 \times 10^5-6 \times 10^5 }[/math] | 0.529 | 2.757 | 1.039 | 0.303 | 0.53 | 2.75 |
[math]\displaystyle{ 6 \times 10^5-7 \times 10^5 }[/math] | 0.548 | 2.728 | 1.038 | 0.296 | 0.55 | 2.72 |
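For reference, here is a minimal Python sketch of how such blockwise statistics can be tabulated; the function `f` is a placeholder for an evaluator of [math]\displaystyle{ (A+B)/B_0 }[/math] (not reproduced here), and the lambda in the usage example is illustrative only:

```python
import numpy as np

def block_stats(f, x_lo, x_hi, step=1):
    """Sample a complex-valued function f at steps of `step` in x and return
    (min |f|, max |f|, mean |f|, std |f|, min Re f, max Re f),
    the columns of the table above."""
    xs = np.arange(x_lo, x_hi, step)
    vals = np.array([f(x) for x in xs], dtype=complex)
    mags = np.abs(vals)
    return (mags.min(), mags.max(), mags.mean(), mags.std(),
            vals.real.min(), vals.real.max())

# Illustration with a placeholder oscillating ratio on [1000, 2000);
# substitute an evaluator of (A^eff+B^eff)/B_0^eff for real use.
print(block_stats(lambda x: 1 + 0.5 * np.exp(1j * np.sqrt(x)), 1000, 2000))
```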
Here is a computation of the magnitude [math]\displaystyle{ |\frac{d}{dx}(B'/B'_0)| }[/math] of the derivative of [math]\displaystyle{ B'/B'_0 }[/math], sampled at steps of 1 in [math]\displaystyle{ x }[/math] (source), together with a crude upper bound coming from the triangle inequality (source), to give some indication of the oscillation:
range of [math]\displaystyle{ T=x/2 }[/math] | max value | average value | standard deviation | triangle inequality bound |
---|---|---|---|---|
0-1000 | 1.04 | 0.33 | 0.19 | |
1000-2000 | 1.25 | 0.39 | 0.24 | |
2000-3000 | 1.31 | 0.39 | 0.25 | |
3000-4000 | 1.39 | 0.38 | 0.27 | |
4000-5000 | 1.64 | 0.37 | 0.26 | |
5000-6000 | 1.60 | 0.36 | 0.27 | |
6000-7000 | 1.61 | 0.36 | 0.26 | |
7000-8000 | 1.55 | 0.36 | 0.27 | |
8000-9000 | 1.65 | 0.34 | 0.26 | |
9000-10000 | 1.47 | 0.34 | 0.26 | |
[math]\displaystyle{ 1-10^5 }[/math] | 1.78 | 0.28 | 0.23 | 2.341 |
[math]\displaystyle{ 10^5-2 \times 10^5 }[/math] | 1.66 | 0.22 | 0.18 | 2.299 |
[math]\displaystyle{ 2 \times 10^5-3 \times 10^5 }[/math] | 1.55 | 0.20 | 0.17 | 2.195 |
[math]\displaystyle{ 3 \times 10^5-4 \times 10^5 }[/math] | 1.53 | 0.19 | 0.16 | 2.109 |
[math]\displaystyle{ 4 \times 10^5-5 \times 10^5 }[/math] | 1.31 | 0.18 | 0.15 | 2.039 |
[math]\displaystyle{ 5 \times 10^5-6 \times 10^5 }[/math] | 1.34 | 0.18 | 0.14 | |
[math]\displaystyle{ 6 \times 10^5-7 \times 10^5 }[/math] | 1.33 | 0.17 | 0.14 | |
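The derivative statistics can be approximated in the same framework; one simple way is a central difference, as in the sketch below (the original computation may have used an exact derivative formula, so this is an assumption, and `f` again stands in for an evaluator of [math]\displaystyle{ B'/B'_0 }[/math]):

```python
import numpy as np

def deriv_mag_stats(f, x_lo, x_hi, h=1e-3):
    """Max/mean/std of |d/dx f(x)| sampled at unit steps of x, with the
    derivative estimated by a central difference of width 2h."""
    xs = np.arange(x_lo, x_hi, 1.0)
    mags = np.array([abs((f(x + h) - f(x - h)) / (2 * h)) for x in xs])
    return mags.max(), mags.mean(), mags.std()
```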
In the toy case, we have
- [math]\displaystyle{ \frac{|A^{toy}+B^{toy}|}{|B^{toy}_0|} \geq |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| }[/math]
where [math]\displaystyle{ b_n := \exp( \frac{t}{4} \log^2 n) }[/math], [math]\displaystyle{ a_n := (n/N)^{y} b_n }[/math], and [math]\displaystyle{ s := \frac{1+y+ix}{2} + \frac{t}{2} \log N + \frac{\pi i t}{8} }[/math]. For the effective approximation one has
- [math]\displaystyle{ \frac{|A^{eff}+B^{eff}|}{|B^{eff}_0|} \geq |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \quad (2.1) }[/math]
where now [math]\displaystyle{ b_n := \exp( \frac{t}{4} \log^2 n) }[/math], [math]\displaystyle{ s := \frac{1+y+ix}{2} + \frac{t}{2} \alpha_1(\frac{1+y+ix}{2}) }[/math], and
- [math]\displaystyle{ a_n := |\frac{\exp( \frac{t}{4} \alpha_1(\frac{1-y+ix}{2})^2 ) H_{0,1}( \frac{1-y+ix}{2} )}{ \exp( \frac{t}{4} \alpha_1(\frac{1+y+ix}{2})^2 ) H_{0,1}( \frac{1+y+ix}{2} ) }| n^{y - \frac{t}{2} \alpha_1(\frac{1-y+ix}{2}) + \frac{t}{2} \alpha_1(\frac{1+y+ix}{2})} b_n. }[/math]
It is thus of interest to obtain lower bounds for expressions of the form
- [math]\displaystyle{ |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| }[/math]
in situations where [math]\displaystyle{ b_1=1 }[/math] is expected to be a dominant term.
From the triangle inequality one obtains the lower bound
- [math]\displaystyle{ |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \geq 1 - |a_1| - \sum_{n=2}^N \frac{|a_n|+|b_n|}{n^\sigma} }[/math]
where [math]\displaystyle{ \sigma := \frac{1+y}{2} + \frac{t}{2} \log N }[/math] is the real part of [math]\displaystyle{ s }[/math]. There is a refinement:
Lemma 1 If [math]\displaystyle{ a_n,b_n }[/math] are real coefficients with [math]\displaystyle{ b_1 = 1 }[/math] and [math]\displaystyle{ 0 \leq a_1 \lt 1 }[/math], then we have
- [math]\displaystyle{ |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \geq 1 - a_1 - \sum_{n=2}^N \frac{\max( |b_n-a_n|, \frac{1-a_1}{1+a_1} |b_n+a_n|)}{n^\sigma}. }[/math]
Proof By a continuity argument we may assume without loss of generality that the left-hand side is positive; we may then write it as
- [math]\displaystyle{ |\sum_{n=1}^N \frac{b_n - e^{i\theta} a_n}{n^s}| }[/math]
for some phase [math]\displaystyle{ \theta }[/math]. By the triangle inequality, this is at least
- [math]\displaystyle{ |1 - e^{i\theta} a_1| - \sum_{n=2}^N \frac{|b_n - e^{i\theta} a_n|}{n^\sigma}. }[/math]
We factor out [math]\displaystyle{ |1 - e^{i\theta} a_1| }[/math], which is at least [math]\displaystyle{ 1-a_1 }[/math], to obtain the lower bound
- [math]\displaystyle{ (1-a_1) (1 - \sum_{n=2}^N \frac{|b_n - e^{i\theta} a_n| / |1 - e^{i\theta} a_1|}{n^\sigma}). }[/math]
By the cosine rule, we have
- [math]\displaystyle{ (|b_n - e^{i\theta} a_n| / |1 - e^{i\theta} a_1|)^2 = \frac{b_n^2 + a_n^2 - 2 a_n b_n \cos \theta}{1 + a_1^2 -2 a_1 \cos \theta}. }[/math]
This is a fractional linear function of [math]\displaystyle{ \cos \theta }[/math] with no poles in the range [math]\displaystyle{ [-1,1] }[/math] of [math]\displaystyle{ \cos \theta }[/math]. Thus this function is monotone on this range and attains its maximum at either [math]\displaystyle{ \cos \theta=+1 }[/math] or [math]\displaystyle{ \cos \theta = -1 }[/math]. We conclude that
- [math]\displaystyle{ \frac{|b_n - e^{i\theta} a_n|}{|1 - e^{i\theta} a_1|} \leq \max( \frac{|b_n-a_n|}{1-a_1}, \frac{|b_n+a_n|}{1+a_1} ) }[/math]
and the claim follows. [math]\displaystyle{ \Box }[/math]
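In computations it is convenient to package the Lemma 1 bound as a routine; here is a minimal sketch, with coefficients passed as Python lists so that `a[0]` plays the role of [math]\displaystyle{ a_1 }[/math]:

```python
def lemma1_lower_bound(a, b, sigma):
    """Lower bound of Lemma 1 for |sum b_n/n^s| - |sum a_n/n^s|, given real
    coefficient lists a, b (with b[0] == 1 and 0 <= a[0] < 1) and
    sigma = Re(s).  A positive return value certifies non-vanishing."""
    a1 = a[0]
    total = 1.0 - a1
    for n in range(2, len(b) + 1):
        an, bn = a[n - 1], b[n - 1]
        total -= max(abs(bn - an),
                     (1 - a1) / (1 + a1) * abs(bn + an)) / n ** sigma
    return total
```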
We can also mollify the [math]\displaystyle{ a_n,b_n }[/math]:
Lemma 2 If [math]\displaystyle{ \lambda_1,\dots,\lambda_D }[/math] are complex numbers, then
- [math]\displaystyle{ |\sum_{d=1}^D \frac{\lambda_d}{d^s}| (|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|) = ( |\sum_{n=1}^{DN} \frac{\tilde b_n}{n^s}| - |\sum_{n=1}^{DN} \frac{\tilde a_n}{n^s}| ) }[/math]
where
- [math]\displaystyle{ \tilde a_n := \sum_{d=1}^D 1_{n \leq dN} 1_{d|n} \lambda_d a_{n/d} }[/math]
- [math]\displaystyle{ \tilde b_n := \sum_{d=1}^D 1_{n \leq dN} 1_{d|n} \lambda_d b_{n/d} }[/math]
Proof This is immediate from the Dirichlet convolution identities
- [math]\displaystyle{ (\sum_{d=1}^D \frac{\lambda_d}{d^s}) \sum_{n=1}^N \frac{a_n}{n^s} = \sum_{n=1}^{DN} \frac{\tilde a_n}{n^s} }[/math]
and
- [math]\displaystyle{ (\sum_{d=1}^D \frac{\lambda_d}{d^s}) \sum_{n=1}^N \frac{b_n}{n^s} = \sum_{n=1}^{DN} \frac{\tilde b_n}{n^s}. }[/math]
[math]\displaystyle{ \Box }[/math]
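The convolution in Lemma 2 is equally direct to implement; a sketch:

```python
def mollify(lam, c):
    """Mollified coefficients of Lemma 2: given lam[d-1] = lambda_d for
    d = 1..D and c[n-1] = c_n for n = 1..N, return the list of tilde-c_n
    for n = 1..D*N, where tilde-c_n sums lambda_d * c_{n/d} over d | n
    with n <= d*N."""
    D, N = len(lam), len(c)
    out = [0.0] * (D * N)
    for d in range(1, D + 1):
        for m in range(1, N + 1):  # n = d*m automatically has d | n, n <= d*N
            out[d * m - 1] += lam[d - 1] * c[m - 1]
    return out
```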
Combining the two lemmas, we see for instance that we can show [math]\displaystyle{ |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \gt 0 }[/math] whenever we can find [math]\displaystyle{ \lambda_1,\dots,\lambda_D }[/math] with [math]\displaystyle{ \lambda_1=1 }[/math] and
- [math]\displaystyle{ \sum_{n=2}^{DN} \frac{\max( \frac{|\tilde b_n-\tilde a_n|}{1-a_1}, \frac{|\tilde b_n+ \tilde a_n|}{1+a_1})}{n^\sigma} \lt 1. }[/math]
A usable choice of mollifier seems to be an Euler product
- [math]\displaystyle{ \sum_{d=1}^D \frac{\lambda_d}{d^s} := \prod_{p \leq P} (1 - \frac{b_p}{p^s}) }[/math]
which is designed to kill off the first few [math]\displaystyle{ \tilde b_n }[/math] coefficients.
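Since each Euler factor is linear in [math]\displaystyle{ p^{-s} }[/math], the nonzero [math]\displaystyle{ \lambda_d }[/math] are supported on squarefree products of the chosen primes, and can be generated as in this sketch (`b` is the list of [math]\displaystyle{ b_n }[/math] coefficients):

```python
def euler_mollifier(b, primes):
    """Dirichlet coefficients lambda_d of prod_{p in primes} (1 - b_p/p^s),
    returned as a dict {d: lambda_d} supported on squarefree products of
    the given primes (with lambda_1 = 1)."""
    lam = {1: 1.0}
    for p in primes:
        for d, v in list(lam.items()):
            lam[d * p] = lam.get(d * p, 0.0) - v * b[p - 1]
    return lam
```

For instance, with primes [math]\displaystyle{ \{2,3\} }[/math] this produces [math]\displaystyle{ \lambda_6 = b_2 b_3 }[/math], the term whose removal is discussed below.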
Analysing the toy model
Regarding the toy problem of showing that [math]\displaystyle{ A^{toy}+B^{toy} }[/math] does not vanish, here are the least values of [math]\displaystyle{ N }[/math] for which this method works (source, source, source, source):
[math]\displaystyle{ P }[/math] in Euler product | [math]\displaystyle{ N }[/math] using triangle inequality | [math]\displaystyle{ N }[/math] using Lemma 1 |
---|---|---|
1 | 1391 | 1080 |
2 | 478 | 341 |
3 | 322 | 220 |
5 | 282 | 192 |
7 | 180 | |
11 | 176 | |
Dropping the [math]\displaystyle{ \lambda_6 }[/math] term from the [math]\displaystyle{ P=3 }[/math] Euler factor worsens the 220 threshold slightly to 235 (source).
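In principle these thresholds can be reproduced by a direct search combining the sketches above (`euler_mollifier`, `mollify`, and the criteria of the two lemmas). The parameters [math]\displaystyle{ t = y = 0.4 }[/math] below are an assumption, and the search is illustrative rather than a verified reproduction of the table:

```python
import math

def toy_least_N(primes=(), use_lemma1=True, t=0.4, y=0.4, n_max=2000):
    """Least N at which the mollified criterion certifies
    |sum b_n/n^s| - |sum a_n/n^s| > 0 in the toy model."""
    for N in range(2, n_max + 1):
        sigma = (1 + y) / 2 + (t / 2) * math.log(N)
        b = [math.exp((t / 4) * math.log(n) ** 2) for n in range(1, N + 1)]
        a = [(n / N) ** y * bn for n, bn in enumerate(b, start=1)]
        lam = euler_mollifier(b, [p for p in primes if p <= N])
        D = max(lam)
        lam_list = [lam.get(d, 0.0) for d in range(1, D + 1)]
        ta, tb = mollify(lam_list, a), mollify(lam_list, b)
        a1 = a[0]  # tilde-a_1 = a_1 since lambda_1 = 1
        if use_lemma1:
            s = sum(max(abs(tb[n - 1] - ta[n - 1]) / (1 - a1),
                        abs(tb[n - 1] + ta[n - 1]) / (1 + a1)) / n ** sigma
                    for n in range(2, D * N + 1))
            ok = s < 1
        else:  # plain triangle inequality
            s = sum((abs(ta[n - 1]) + abs(tb[n - 1])) / n ** sigma
                    for n in range(2, D * N + 1))
            ok = s < 1 - a1
        if ok:
            return N
    return None
```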
Analysing the effective model
The differences between the toy model and the effective model are:
- The real part [math]\displaystyle{ \sigma }[/math] of [math]\displaystyle{ s }[/math] is now [math]\displaystyle{ \frac{1+y}{2} + \frac{t}{2} \mathrm{Re} \alpha_1(\frac{1+y+ix}{2}) }[/math] rather than [math]\displaystyle{ \frac{1+y}{2} + \frac{t}{2} \log N }[/math]. (The imaginary part of [math]\displaystyle{ s }[/math] also changes somewhat.)
- The coefficient [math]\displaystyle{ a_n }[/math] is now given by
- [math]\displaystyle{ a_n = \lambda n^{y + \frac{t}{2} (\alpha_1(\frac{1+y+ix}{2}) - \alpha_1(\frac{1-y+ix}{2}))} b_n }[/math]
rather than [math]\displaystyle{ a_n = N^{-y} n^y b_n }[/math], where
- [math]\displaystyle{ \lambda := |\frac{\exp( \frac{t}{4} \alpha_1(\frac{1-y+ix}{2})^2 ) H_{0,1}( \frac{1-y+ix}{2} )}{\exp( \frac{t}{4} \alpha_1(\frac{1+y+ix}{2})^2 ) H_{0,1}( \frac{1+y+ix}{2} ) }|. }[/math]
Two complications arise here compared with the toy model: firstly, [math]\displaystyle{ \sigma,a_n }[/math] now depend on [math]\displaystyle{ x }[/math] and not just on [math]\displaystyle{ N }[/math]; and secondly, the [math]\displaystyle{ a_n }[/math] are not quite real-valued, making it more difficult to apply Lemma 1.
However we have good estimates for [math]\displaystyle{ \sigma,a_n }[/math] that depend only on [math]\displaystyle{ N }[/math]. Note that
- [math]\displaystyle{ 2\pi N^2 \leq T' \lt 2\pi (N+1)^2 }[/math]
and hence
- [math]\displaystyle{ x_N \leq x \lt x_{N+1} }[/math]
where
- [math]\displaystyle{ x_N := 4\pi N^2 - \frac{\pi t}{4}. }[/math]
To control [math]\displaystyle{ \sigma }[/math], it suffices to obtain lower bounds because our criteria (both the triangle inequality and Lemma 1) become harder to satisfy when [math]\displaystyle{ \sigma }[/math] decreases. We compute
- [math]\displaystyle{ \sigma = \frac{1+y}{2} + \frac{t}{2} \mathrm{Re}(\frac{1}{1+y+ix} + \frac{2}{-1+y+ix} + \frac{1}{2} \log \frac{1+y+ix}{4\pi}) }[/math]
- [math]\displaystyle{ = \frac{1+y}{2} + \frac{t}{2} (\frac{1+y}{(1+y)^2+x^2} + \frac{-2+2y}{(-1+y)^2+x^2} + \frac{1}{2} \log \frac{|1+y+ix|}{4\pi}) }[/math]
- [math]\displaystyle{ \geq \frac{1+y}{2} + \frac{t}{2} (\frac{1+y}{(-1+y)^2+x^2} + \frac{-2+2y}{(-1+y)^2+x^2} + \frac{1}{2} \log \frac{x}{4\pi}) }[/math]
- [math]\displaystyle{ \geq \frac{1+y}{2} + \frac{t}{2} (\frac{3y-1}{(-1+y)^2+x^2} + \log N) }[/math]
- [math]\displaystyle{ \geq \frac{1+y}{2} + \frac{t}{2} \log N }[/math]
assuming that [math]\displaystyle{ y \geq 1/3 }[/math]. Hence we can actually just use the same value of [math]\displaystyle{ \sigma }[/math] as in the toy case.
Next we control [math]\displaystyle{ \lambda }[/math]. Note that we can increase [math]\displaystyle{ \lambda }[/math] (thus multiplying [math]\displaystyle{ \sum_{n=1}^N \frac{a_n}{n^s} }[/math] by a quantity greater than 1) without affecting the validity of (2.1), so we just need upper bounds on [math]\displaystyle{ \lambda }[/math]. We may factor
- [math]\displaystyle{ \lambda = \exp( \frac{t}{4} \mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) + \mathrm{Re}( f(\frac{1-y+ix}{2}) - f(\frac{1+y+ix}{2}) ) ) }[/math]
where
- [math]\displaystyle{ f(s) := -\frac{s}{2} \log \pi + (\frac{s}{2} - \frac{1}{2}) \log \frac{s}{2} - \frac{s}{2}. }[/math]
By the mean value theorem, we have
- [math]\displaystyle{ \mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) = -2 y \mathrm{Re}( \alpha_1(s_1) \alpha'_1(s_1) ) }[/math]
for some [math]\displaystyle{ s_1 }[/math] between [math]\displaystyle{ \frac{1-y+ix}{2} }[/math] and [math]\displaystyle{ \frac{1+y+ix}{2} }[/math]. We have
- [math]\displaystyle{ \alpha_1(s_1) = \frac{1}{2s_1} + \frac{1}{s_1-1} + \frac{1}{2} \log \frac{s_1}{2\pi} }[/math]
- [math]\displaystyle{ = O_{\leq}(\frac{1}{x}) + O_{\leq}(\frac{1}{x/2}) + \frac{1}{2} \log \frac{|s_1|}{2\pi} + O_{\leq}(\frac{\pi}{4}) }[/math]
- [math]\displaystyle{ = O_{\leq}( \frac{\pi}{4} + \frac{3}{x_N}) + \frac{1}{2} O_{\leq}^{\mathbf{R}}( \log \frac{|1+y+ix_{N+1}|}{4\pi} ) }[/math]
and
- [math]\displaystyle{ \alpha'_1(s_1) = -\frac{1}{2s_1^2} - \frac{1}{(s_1-1)^2} + \frac{1}{2s_1} }[/math]
- [math]\displaystyle{ = O_{\leq}(\frac{1}{x^2/2}) + O_{\leq}(\frac{1}{x^2/4}) + \frac{1}{2s_1} }[/math]
- [math]\displaystyle{ = O_{\leq}(\frac{6}{x_N^2}) + \frac{1}{2s_1} }[/math]
- [math]\displaystyle{ = O_{\leq}(\frac{6}{x_N^2}) + O_{\leq}( \frac{1}{x_N} ). }[/math]
Thus one has
- [math]\displaystyle{ \mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) = 2y O_{\leq}( (\frac{\pi}{4} + \frac{3}{x_N}) (\frac{1}{x_N} + \frac{6}{x_N^2}) ) }[/math]
- [math]\displaystyle{ + 2y O_{\leq}( \log \frac{|1+y+ix_{N+1}|}{4\pi} (\frac{6}{x_N^2} + |\mathrm{Re} \frac{1}{2s_1}|) ). }[/math]
Now we have
- [math]\displaystyle{ \mathrm{Re} \frac{1}{2s_1} = \frac{\mathrm{Re}(s_1)}{2|s_1|^2} }[/math]
- [math]\displaystyle{ \leq \frac{1+y}{x^2} }[/math]
- [math]\displaystyle{ \leq \frac{1+y}{x_N^2}; }[/math]
also
- [math]\displaystyle{ (\frac{\pi}{4} + \frac{3}{x_N}) (\frac{1}{x_N} + \frac{6}{x_N^2}) \leq \frac{\pi}{4} (1 + \frac{12/\pi}{x_N}) \frac{1}{x_N-6} }[/math]
- [math]\displaystyle{ \leq \frac{\pi}{4} ( \frac{1}{x_N-6} + \frac{12/\pi}{(x_N-6)^2} ) }[/math]
- [math]\displaystyle{ \leq \frac{\pi}{4} \frac{1}{x_N - 6 - 12/\pi}. }[/math]
We conclude that
- [math]\displaystyle{ \mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) = O_{\leq}(\frac{\pi y}{2 (x_N - 6 - 12/\pi)} + \frac{2y(7+y)}{x_N^2} \log \frac{|1+y+ix_{N+1}|}{4\pi}). }[/math]
In a similar vein, from the mean value theorem we have
- [math]\displaystyle{ \mathrm{Re}( f(\frac{1-y+ix}{2}) - f(\frac{1+y+ix}{2}) ) = -y \mathrm{Re} f'(s_2) }[/math]
for some [math]\displaystyle{ s_2 }[/math] between [math]\displaystyle{ \frac{1-y+ix}{2} }[/math] and [math]\displaystyle{ \frac{1+y+ix}{2} }[/math]. We have
- [math]\displaystyle{ \mathrm{Re} f'(s_2) = -\frac{1}{2} \log \pi + \frac{1}{2} \log \frac{|s_2|}{2} - \mathrm{Re} \frac{1}{2s_2} }[/math]
- [math]\displaystyle{ = \frac{1}{2} \log \frac{|s_2|}{2\pi} + O_{\leq}(\frac{\mathrm{Re}(s_2)}{2|s_2|^2}) }[/math]
- [math]\displaystyle{ \geq \log N + O_{\leq}(\frac{1+y}{x^2}) }[/math]
- [math]\displaystyle{ \geq \log N + O_{\leq}(\frac{1+y}{x_N^2}) }[/math]
and thus
- [math]\displaystyle{ \lambda \leq N^{-y} \exp( \frac{\pi y}{2 (x_N - 6 - 12/\pi)} + \frac{2y(7+y)}{x_N^2} \log \frac{|1+y+ix_{N+1}|}{4\pi} + \frac{y(1+y)}{x_N^2} ) }[/math]
- [math]\displaystyle{ \leq e^\delta N^{-y} }[/math]
where
- [math]\displaystyle{ \delta := \frac{\pi y}{2 (x_N - 6 - \frac{14+2y}{\pi})} + \frac{2y(7+y)}{x_N^2} \log \frac{|1+y+ix_{N+1}|}{4\pi}. }[/math]
Asymptotically we have
- [math]\displaystyle{ \delta = \frac{\pi y}{2 x_N} + O( \frac{\log x_N}{x_N^2} ) = O( \frac{1}{x_N} ). }[/math]
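For numerical use, [math]\displaystyle{ \delta }[/math] is a concrete function of [math]\displaystyle{ N }[/math] (and of [math]\displaystyle{ t,y }[/math], here assumed to be 0.4 as elsewhere); a sketch transcribing the displayed formula:

```python
import math

def delta_bound(N, t=0.4, y=0.4):
    """The quantity delta in the bound lambda <= exp(delta) * N^{-y},
    per the displayed formula above."""
    x_N = 4 * math.pi * N ** 2 - math.pi * t / 4
    x_N1 = 4 * math.pi * (N + 1) ** 2 - math.pi * t / 4
    log_term = math.log(math.hypot(1 + y, x_N1) / (4 * math.pi))
    return (math.pi * y / (2 * (x_N - 6 - (14 + 2 * y) / math.pi))
            + 2 * y * (7 + y) / x_N ** 2 * log_term)
```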
Now we control [math]\displaystyle{ \alpha_1(\frac{1+y+ix}{2}) - \alpha_1(\frac{1-y+ix}{2}) }[/math]. By the mean-value theorem we have
- [math]\displaystyle{ \alpha_1(\frac{1+y+ix}{2}) - \alpha_1(\frac{1-y+ix}{2}) = O_{\leq}( y |\alpha'_1(s_3)|) }[/math]
for some [math]\displaystyle{ s_3 }[/math] between [math]\displaystyle{ \frac{1+y+ix}{2} }[/math] and [math]\displaystyle{ \frac{1-y+ix}{2} }[/math]. As before we have
- [math]\displaystyle{ \alpha'_1(s_3) = -\frac{1}{2s_3^2} - \frac{1}{(s_3-1)^2} + \frac{1}{2s_3} }[/math]
- [math]\displaystyle{ = O_{\leq}( \frac{1}{x^2/2} + \frac{1}{x^2/4} + \frac{1}{x} ) }[/math]
- [math]\displaystyle{ = O_{\leq}( \frac{1}{x_N} + \frac{6}{x_N^2} ) }[/math]
- [math]\displaystyle{ = O_{\leq}( \frac{1}{x_N-6} ). }[/math]
We conclude that (after replacing [math]\displaystyle{ \lambda }[/math] with [math]\displaystyle{ e^\delta N^{-y} }[/math])
- [math]\displaystyle{ a_n = (n/N)^y \exp( \delta + O_{\leq}( \frac{t y \log n}{2(x_N-6)} ) ) b_n. }[/math]
The triangle inequality argument will thus give [math]\displaystyle{ A^{eff}+B^{eff} }[/math] non-zero as long as
- [math]\displaystyle{ \sum_{n=1}^N (1 + (n/N)^y \exp( \delta + \frac{t y \log n}{2(x_N-6)} ) ) \frac{b_n}{n^\sigma} \lt 2. }[/math]
The situation with using Lemma 1 is a bit more complicated because [math]\displaystyle{ a_n }[/math] is not quite real. We can write [math]\displaystyle{ a_n = e^\delta a_n^{toy} + O_{\leq}( e_n ) }[/math] where
- [math]\displaystyle{ a_n^{toy} := (n/N)^y b_n }[/math]
and
- [math]\displaystyle{ e_n := e^\delta (n/N)^y (\exp( \frac{t y \log n}{2(x_N-6)} ) - 1) b_n }[/math]
and then by Lemma 1 and the triangle inequality we can make [math]\displaystyle{ A^{eff}+B^{eff} }[/math] non-zero as long as
- [math]\displaystyle{ e^\delta a_1^{toy} + \sum_{n=2}^N \frac{\max( |b_n - e^\delta a_n^{toy}|, \frac{1-e^\delta a_1^{toy}}{1+e^\delta a_1^{toy}} |b_n + e^\delta a_n^{toy}| )}{n^\sigma} + \sum_{n=1}^N \frac{e_n}{n^\sigma} \lt 1. }[/math]
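Both sufficient conditions are straightforward to evaluate once [math]\displaystyle{ \delta }[/math] is in hand. The following sketch (reusing `delta_bound` above, with the same assumed parameters [math]\displaystyle{ t = y = 0.4 }[/math]) returns the left-hand sides of the two criteria, which certify non-vanishing when below 2 and 1 respectively:

```python
import math

def effective_criteria(N, t=0.4, y=0.4):
    """Left-hand sides of the two unmollified criteria above for
    A^eff + B^eff != 0: (triangle_sum, lemma1_sum).  The first certifies
    when < 2, the second when < 1."""
    sigma = (1 + y) / 2 + (t / 2) * math.log(N)
    x_N = 4 * math.pi * N ** 2 - math.pi * t / 4
    ed = math.exp(delta_bound(N, t, y))  # e^delta
    b = [math.exp((t / 4) * math.log(n) ** 2) for n in range(1, N + 1)]
    a_toy = [(n / N) ** y * bn for n, bn in enumerate(b, start=1)]
    drift = [t * y * math.log(n) / (2 * (x_N - 6)) for n in range(1, N + 1)]
    # Triangle inequality criterion.
    tri = sum((1 + (n / N) ** y * ed * math.exp(drift[n - 1]))
              * b[n - 1] / n ** sigma for n in range(1, N + 1))
    # Lemma 1 criterion with the error terms e_n.
    e = [ed * a_toy[n - 1] * (math.exp(drift[n - 1]) - 1)
         for n in range(1, N + 1)]
    a1 = ed * a_toy[0]
    lem = a1 + sum(e[n - 1] / n ** sigma for n in range(1, N + 1))
    lem += sum(max(abs(b[n - 1] - ed * a_toy[n - 1]),
                   (1 - a1) / (1 + a1) * abs(b[n - 1] + ed * a_toy[n - 1]))
               / n ** sigma for n in range(2, N + 1))
    return tri, lem
```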