Polymath15 test problem

We are initially focusing attention on the following

'''Test problem'''  For <math>t=y=0.4</math>, can one prove that <math>H_t(x+iy) \neq 0</math> for all <math>x \geq 0</math>?

If we can show this, it is likely that (with the additional use of the argument principle, and some further information on the behaviour of <math>H_t(x+iy)</math> at <math>y=0.4</math>) one can show that <math>H_t(x+iy) \neq 0</math> for all <math>y \geq 0.4</math> as well.  This would give a new upper bound

:<math> \Lambda \leq 0.4 + \frac{1}{2} (0.4)^2 = 0.48 </math>

for the de Bruijn-Newman constant.

For very small values of <math>x</math> we expect to be able to establish this by direct calculation of <math>H_t(x+iy)</math>.  For medium or large values, the strategy is to use a suitable approximation

:<math> H_t(x+iy) \approx A + B </math>

for some relatively easily computable quantities <math>A = A_t(x+iy), B = B_t(x+iy)</math> (it may possibly be necessary to use a refined approximation <math>A+B-C</math> instead).  The quantity <math>B</math> contains a non-zero main term <math>B_0</math> which is expected to roughly dominate.  To show <math>H_t(x+iy)</math> is non-zero, it would suffice to show that

:<math> \frac{|H_t - A - B|}{|B_0|} < \frac{|A + B|}{|B_0|}. </math>

Thus one will seek upper bounds on the error <math>\frac{|H_t - A - B|}{|B_0|}</math> and lower bounds on <math>\frac{|A+B|}{|B_0|}</math> for various ranges of <math>x</math>.  Numerically it seems that the RHS stays above 0.4 once <math>x</math> is moderately large, while the LHS stays below 0.1, which looks promising for the rigorous arguments.
== Choices of approximation ==


There are a number of slightly different approximations we have used in previous discussion.  The first approximation was <math>A+B</math>, where


:<math>A := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{s+4}{2\pi n^2})}{n^s}</math>
:<math>B := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{5-s}{2\pi n^2})}{n^{1-s}}</math>
:<math>B_0 := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \exp( \frac{t}{16} \log^2 \frac{5-s}{2\pi} )</math>
:<math>s := \frac{1-y+ix}{2}</math>
:<math>N := \lfloor \sqrt{\frac{\mathrm{Im} s}{2\pi}} \rfloor = \lfloor \sqrt{\frac{x}{4\pi}} \rfloor.</math>
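
For orientation, the following is a minimal numerical sketch of this first approximation in Python, using the mpmath library (illustrative only; the function name and structure are ad hoc, not the project's reference code).  With <math>t=y=0.4</math> it should, for instance, roughly reproduce the <math>10^{-167}</math> size of these quantities at <math>x=10^3</math> seen in the tables below.

<pre>
# Illustrative sketch of the first approximation A+B (not the project's code).
from mpmath import mp, mpc, mpf, pi, gamma, exp, log, sqrt, floor

mp.dps = 50  # plenty of working precision for moderate x

def first_approximation(x, y=0.4, t=0.4):
    """Return (A, B, B_0) as defined above, with s = (1-y+ix)/2."""
    s = mpc(1 - y, x) / 2
    N = int(floor(sqrt(x / (4 * pi))))
    pref = mpf(1) / 8 * s * (s - 1) / 2
    A = pref * pi**(-s / 2) * gamma(s / 2) * sum(
        exp(t / 16 * log((s + 4) / (2 * pi * n**2))**2) / n**s
        for n in range(1, N + 1))
    B = pref * pi**(-(1 - s) / 2) * gamma((1 - s) / 2) * sum(
        exp(t / 16 * log((5 - s) / (2 * pi * n**2))**2) / n**(1 - s)
        for n in range(1, N + 1))
    B0 = pref * pi**(-(1 - s) / 2) * gamma((1 - s) / 2) * exp(t / 16 * log((5 - s) / (2 * pi))**2)
    return A, B, B0
</pre>

Note that for <math>x < 4\pi</math> one has <math>N=0</math> and both sums are empty, which is why the table below lists <math>A+B=0</math> at <math>x=10</math>.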


There is also the refinement <math>A+B-C</math>, where
:<math> C:= \frac{1}{8} \exp(-\frac{t\pi^2}{64}) \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i N)^{s-1} \Psi( \frac{s}{2\pi i N}-N )</math>
:<math> \Psi(\alpha) := 2\pi \frac{\cos \pi(\frac{1}{2} \alpha^2 - \alpha - \frac{\pi}{8})}{\cos(\pi \alpha)} \exp( \frac{i \pi}{2} \alpha^2 - \frac{5 \pi i}{8}).</math>
 
The first approximation was modified slightly to <math>A'+B'</math>, where


:<math>A' := \frac{2}{8} \pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s+4}{2}-\frac{1}{2}) \log \frac{s+4}{2} - \frac{s+4}{2}) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{s+4}{2\pi n^2})}{n^s}</math>
:<math>B' := \frac{2}{8} \pi^{-(1-s)/2} \sqrt{2\pi} \exp( (\frac{5-s}{2}-\frac{1}{2}) \log \frac{5-s}{2} - \frac{5-s}{2}) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{5-s}{2\pi n^2})}{n^{1-s}}</math>
:<math>B'_0 := \frac{2}{8} \pi^{-(1-s)/2} \sqrt{2\pi} \exp( (\frac{5-s}{2}-\frac{1}{2}) \log \frac{5-s}{2} - \frac{5-s}{2}) \exp( \frac{t}{16} \log^2 \frac{5-s}{2\pi} )</math>
:<math>s := \frac{1-y+ix}{2}</math>
:<math>N := \lfloor \sqrt{\frac{\mathrm{Im} s}{2\pi}} \rfloor = \lfloor \sqrt{\frac{x}{4\pi}} \rfloor.</math>


In [[Effective bounds on H_t - second approach]], a more refined approximation <math>A^{eff} + B^{eff}</math> was introduced:


:<math> A^{eff} := \frac{1}{8} \exp( \frac{t}{4} \alpha_1(\frac{1-y+ix}{2})^2 ) H_{0,1}(\frac{1-y+ix}{2}) \sum_{n=1}^N \frac{1}{n^{\frac{1-y+ix}{2} + \frac{t \alpha_1(\frac{1-y+ix}{2})}{2} - \frac{t}{4} \log n}}</math>
:<math> B^{eff} := \frac{1}{8} \exp( \frac{t}{4} \overline{\alpha_1(\frac{1+y+ix}{2})}^2 ) \overline{H_{0,1}(\frac{1+y+ix}{2})} \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2} + \frac{t \overline{\alpha_1(\frac{1+y+ix}{2})}}{2} - \frac{t}{4} \log n}}</math>
:<math> B^{eff}_0 := \frac{1}{8} \exp( \frac{t}{4} \overline{\alpha_1(\frac{1+y+ix}{2})}^2 ) \overline{H_{0,1}(\frac{1+y+ix}{2})} </math>
:<math>H_{0,1}(s) := \frac{s (s-1)}{2} \pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s}{2} - \frac{1}{2}) \log \frac{s}{2} - \frac{s}{2} )</math>
:<math> \alpha_1(s) := \frac{1}{2s} + \frac{1}{s-1} + \frac{1}{2} \log \frac{s}{2\pi} </math>
:<math> N := \lfloor \sqrt{ \frac{T'}{2\pi}} \rfloor</math>
:<math> T' := \frac{x}{2} + \frac{\pi t}{8}.</math>
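
These definitions translate directly into code; here is a small mpmath sketch (again illustrative only, with ad hoc names) of <math>\alpha_1</math>, <math>H_{0,1}</math> and the main term <math>B^{eff}_0</math>:

<pre>
# Illustrative sketch of alpha_1, H_{0,1} and B^eff_0 (not the project's code).
from mpmath import mp, mpc, mpf, pi, exp, log, sqrt, conj

mp.dps = 50

def alpha1(s):
    return 1 / (2 * s) + 1 / (s - 1) + log(s / (2 * pi)) / 2

def H01(s):
    return (s * (s - 1) / 2) * pi**(-s / 2) * sqrt(2 * pi) \
        * exp((s / 2 - mpf(1) / 2) * log(s / 2) - s / 2)

def B_eff_0(x, y=0.4, t=0.4):
    """Main term B^eff_0 at x+iy; note the conjugated argument (1+y+ix)/2."""
    s = mpc(1 + y, x) / 2
    return exp(t / 4 * conj(alpha1(s))**2) * conj(H01(s)) / 8
</pre>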


There is a refinement <math>A^{eff}+B^{eff}-C^{eff}</math>, where
:<math>C^{eff} := \frac{1}{8} \exp( \frac{t\pi^2}{64}) \frac{s'(s'-1)}{2} (-1)^N ( \pi^{-s'/2} \Gamma(s'/2) a^{-\sigma} C_0(p) U + \pi^{-(1-s')/2} \Gamma((1-s')/2) a^{-(1-\sigma)} \overline{C_0(p)} \overline{U})</math>
:<math>s' := \frac{1-y}{2} + iT' = \frac{1-y+ix}{2} + \frac{\pi i t}{8} </math>
:<math>a := \sqrt{\frac{T'}{2\pi}}</math>
:<math>p := 1 - 2(a-N)</math>
:<math>\sigma := \mathrm{Re} s' = \frac{1-y}{2}</math>
:<math>U := \exp( -i (\frac{T'}{2} \log \frac{T'}{2\pi} - \frac{T'}{2} - \frac{\pi}{8} ))</math>
:<math>C_0(p) := \frac{ \exp( \pi i (p^2/2 + 3/8) )- i \sqrt{2} \cos(\pi p/2)}{2 \cos(\pi p)}.</math>
 
One can also replace <math>C^{eff}</math> by the very slightly different quantity
:<math>\tilde C^{eff} :=\frac{2 e^{-\pi i y/8}}{8} \exp( \frac{t\pi^2}{64}) (-1)^N \mathrm{Re}( H_{0,1}(iT') C_0(p) U e^{\pi i/8} ).</math>
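
For reference, the auxiliary quantities <math>a, p, U, C_0</math> entering <math>C^{eff}</math> can be coded straight from the definitions above.  The following mpmath sketch is illustrative only (the helper names are ad hoc, not the project's reference implementation):

<pre>
# Illustrative sketch of the ingredients N, a, p, U, C_0 of C^eff (not the project's code).
from mpmath import mp, mpf, pi, exp, cos, sqrt, log, floor

mp.dps = 50

def C0(p):
    return (exp(pi * 1j * (p**2 / 2 + mpf(3) / 8)) - 1j * sqrt(2) * cos(pi * p / 2)) \
        / (2 * cos(pi * p))

def Ceff_ingredients(x, y=0.4, t=0.4):
    Tp = mpf(x) / 2 + pi * t / 8          # T'
    N = int(floor(sqrt(Tp / (2 * pi))))
    a = sqrt(Tp / (2 * pi))
    p = 1 - 2 * (a - N)
    U = exp(-1j * (Tp / 2 * log(Tp / (2 * pi)) - Tp / 2 - pi / 8))
    return N, a, p, U, C0(p)
</pre>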
 
Finally, a simplified approximation is <math>A^{toy} + B^{toy}</math>, where


:<math> A^{toy} := B^{toy}_0 \exp(i ((\frac{x}{2} + \frac{\pi t}{8}) \log \frac{x}{4\pi} - \frac{x}{2} - \frac{\pi}{4} )) N^{-y} \sum_{n=1}^N \frac{1}{n^{\frac{1-y+ix}{2} + \frac{t}{4} \log \frac{N^2}{n} + \pi i t/8}}</math>
:<math> B^{toy} := B^{toy}_0 \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2} + \frac{t}{4} \log \frac{N^2}{n} - \pi i t/8}}</math>
:<math> B^{toy}_0 := \frac{\sqrt{2}}{4} \pi^2 N^{\frac{7+y}{2}} \exp( i (-\frac{x}{4} \log \frac{x}{4\pi} + \frac{x}{4} + \frac{9-y}{8} \pi) + \frac{t}{16} (\log \frac{x}{4\pi} - \frac{\pi i}{2})^2 ) e^{-\pi x/8}</math>
:<math> N := \lfloor \sqrt{\frac{x}{4\pi}} \rfloor.</math>

Here is a table comparing the size of the various main terms:

{| border=1
|-
! style="text-align:left;"| <math>x</math>
! <math>B_0</math>
! <math>B'_0</math>
! <math>B^{eff}_0</math>
! <math>B^{toy}_0</math>
|-
| <math>10^3</math>
| <math>(3.4405 + 3.5443 i) \times 10^{-167}</math>
| <math>(3.4204 + 3.5383 i) \times 10^{-167}</math>
| <math>(3.4426 + 3.5411 i) \times 10^{-167}</math>
| <math>(2.3040 + 2.3606 i) \times 10^{-167}</math>
|-
| <math>10^4</math>
| <math>(-1.1843 - 7.7882 i) \times 10^{-1700}</math>
| <math>(-1.1180 - 7.7888 i) \times 10^{-1700}</math>
| <math>(-1.1185 - 7.7879 i) \times 10^{-1700}</math>
| <math>(-1.1155 - 7.5753 i) \times 10^{-1700}</math>
|-
| <math>10^5</math>
| <math>(-7.6133 + 2.5065 i) \times 10^{-17047}</math>
| <math>(-7.6134 + 2.5060 i) \times 10^{-17047}</math>
| <math>(-7.6134 + 2.5059 i) \times 10^{-17047}</math>
| <math>(-7.5483 + 2.4848 i) \times 10^{-17047}</math>
|-
| <math>10^6</math>
| <math>(-3.1615 - 7.7093 i) \times 10^{-170537}</math>
| <math>(-3.1676 - 7.7063 i) \times 10^{-170537}</math>
| <math>(-3.1646 - 7.7079 i) \times 10^{-170537}</math>
| <math>(-3.1590 - 7.6898 i) \times 10^{-170537}</math>
|-
| <math>10^7</math>
| <math>(2.1676 - 9.6330 i) \times 10^{-1705458}</math>
| <math>(2.1711 - 9.6236 i) \times 10^{-1705458}</math>
| <math>(2.1571 - 9.6329 i) \times 10^{-1705458}</math>
| <math>(2.2566 - 9.6000 i) \times 10^{-1705458}</math>
|}

Here are some typical values of <math>B/B_0</math> (note that <math>B/B_0</math> and <math>B'/B'_0</math> are identical):

{| border=1
|-
! style="text-align:left;"| <math>x</math>
! <math>B/B_0</math>
! <math>B'/B'_0</math>
! <math>B^{eff}/B^{eff}_0</math>
! <math>B^{toy}/B^{toy}_0</math>
|-
| <math>10^3</math>
| <math>0.7722 + 0.6102 i</math>
| <math>0.7722 + 0.6102 i</math>
| <math>0.7733 + 0.6101 i</math>
| <math>0.7626 + 0.6192 i</math>
|-
| <math>10^4</math>
| <math>0.7434 - 0.0126 i</math>
| <math>0.7434 - 0.0126 i</math>
| <math>0.7434 - 0.0126 i</math>
| <math>0.7434 - 0.0124 i</math>
|-
| <math>10^5</math>
| <math>1.1218 - 0.3211 i</math>
| <math>1.1218 - 0.3211 i</math>
| <math>1.1218 - 0.3211 i</math>
| <math>1.1219 - 0.3213 i</math>
|-
| <math>10^6</math>
| <math>1.3956 - 0.5682 i</math>
| <math>1.3956 - 0.5682 i</math>
| <math>1.3955 - 0.5682 i</math>
| <math>1.3956 - 0.5683 i</math>
|-
| <math>10^7</math>
| <math>1.6400 + 0.0198 i</math>
| <math>1.6400 + 0.0198 i</math>
| <math>1.6401 + 0.0198 i</math>
| <math>1.6400 - 0.0198 i</math>
|}

Here are some typical values of <math>A/B_0</math>, which seems to be about an order of magnitude smaller than <math>B/B_0</math> in many cases:

{| border=1
|-
! style="text-align:left;"| <math>x</math>
! <math>A/B_0</math>
! <math>A'/B'_0</math>
! <math>A^{eff}/B^{eff}_0</math>
! <math>A^{toy}/B^{toy}_0</math>
|-
| <math>10^3</math>
| <math>-0.3856 - 0.0997 i</math>
| <math>-0.3857 - 0.0953 i</math>
| <math>-0.3854 - 0.1002 i</math>
| <math>-0.4036 - 0.0968 i</math>
|-
| <math>10^4</math>
| <math>-0.2199 - 0.0034 i</math>
| <math>-0.2199 - 0.0036 i</math>
| <math>-0.2199 - 0.0033 i</math>
| <math>-0.2208 - 0.0033 i</math>
|-
| <math>10^5</math>
| <math>0.1543 + 0.1660 i</math>
| <math>0.1543 + 0.1660 i</math>
| <math>0.1543 + 0.1660 i</math>
| <math>0.1544 + 0.1663 i</math>
|-
| <math>10^6</math>
| <math>-0.1013 - 0.1887 i</math>
| <math>-0.1010 - 0.1889 i</math>
| <math>-0.1011 - 0.1890 i</math>
| <math>-0.1012 - 0.1888 i</math>
|-
| <math>10^7</math>
| <math>-0.1018 + 0.1135 i</math>
| <math>-0.1022 + 0.1133 i</math>
| <math>-0.1025 + 0.1128 i</math>
| <math>-0.0986 + 0.1163 i</math>
|}


Here are some typical values of <math>C/B_0</math>, which is significantly smaller than either <math>A/B_0</math> or <math>B/B_0</math>:


{| border=1
|-
! style="text-align:left;"| <math>x</math>
! <math>C/B_0</math>
! <math>C^{eff}/B^{eff}_0</math>
|-
| <math>10^3</math>
| <math>-0.1183 + 0.0697i</math>
| <math>-0.0581 + 0.0823 i</math>
|-
| <math>10^4</math>
| <math>-0.0001 - 0.0184 i</math>
| <math>-0.0001 - 0.0172 i</math>
|-
| <math>10^5</math>
| <math>-0.0033 - 0.0005i</math>
| <math>-0.0031 - 0.0005i</math>
|-
| <math>10^6</math>
| <math>-0.0001 - 0.0006 i</math>
| <math>-0.0001 - 0.0006 i</math>
|-
| <math>10^7</math>
| <math>-0.0000 - 0.0001 i</math>
| <math>-0.0000 - 0.0001 i</math>
|}


Some values of <math>H_t</math> and its approximations at small values of <math>x</math> [https://terrytao.wordpress.com/2018/03/02/polymath15-fifth-thread-finishing-off-the-test-problem/#comment-493456 source] [https://terrytao.wordpress.com/2018/03/02/polymath15-fifth-thread-finishing-off-the-test-problem/#comment-493715 source]:


{| border=1
|-
! style="text-align:left;"| <math>x</math>
! <math>H_t</math>
! <math>A+B</math>
! <math>A'+B'</math>
! <math>A^{eff}+B^{eff}</math>
! <math>A^{toy}+B^{toy}</math>
! <math>A+B-C</math>
! <math>A^{eff}+B^{eff}-C^{eff}</math>
|-
| <math>10</math>
| <math>(3.442 - 0.168 i) \times 10^{-2}</math>
| 0
| 0
| 0
| N/A
| N/A
| <math>(3.501 - 0.316 i) \times 10^{-2}</math>
|-
| <math>30</math>
| <math>(-1.000 - 0.071 i) \times 10^{-4}</math>
| <math>(-0.650 - 0.188 i) \times 10^{-4}</math>
| <math>(-0.211 - 0.192 i) \times 10^{-4}</math>
| <math>(-0.670 - 0.114 i) \times 10^{-4}</math>
| <math>(-0.136 + 0.021 i) \times 10^{-4}</math>
| <math>(-1.227 - 0.058 i) \times 10^{-4}</math>
| <math>(-1.032 - 0.066 i) \times 10^{-4}</math>
|-
| <math>100</math>
| <math>(6.702 + 3.134 i) \times 10^{-16}</math>
| <math>(2.890 + 3.667 i) \times 10^{-16}</math>
| <math>(2.338 + 3.742 i) \times 10^{-16}</math>
| <math>(2.955 + 3.650 i) \times 10^{-16}</math>
| <math>(0.959 + 0.871 i) \times 10^{-16}</math>
| <math>(6.158 + 12.226 i) \times 10^{-16}</math>
| <math>(6.763 + 3.074 i) \times 10^{-16}</math>
|-
| <math>300</math>
| <math>(-4.016 - 1.401 i) \times 10^{-49}</math>
| <math>(-5.808 - 1.140 i) \times 10^{-49}</math>
| <math>(-5.586 - 1.228 i) \times 10^{-49}</math>
| <math>(-5.824 - 1.129 i) \times 10^{-49}</math>
| <math>(-2.677 - 0.327 i) \times 10^{-49}</math>
| <math>(-3.346 + 6.818 i) \times 10^{-49}</math>
| <math>(-4.032 - 1.408 i) \times 10^{-49}</math>
|-
| <math>1000</math>
| <math>(0.015 + 3.051 i) \times 10^{-167}</math>
| <math>(-0.479 + 3.126 i) \times 10^{-167}</math>
| <math>(-0.516 + 3.135 i) \times 10^{-167}</math>
| <math>(-0.474 + 3.124 i) \times 10^{-167}</math>
| <math>(-0.406 + 2.051 i) \times 10^{-167}</math>
| <math>(0.175 + 3.306 i) \times 10^{-167}</math>
| <math>(0.017 + 3.047 i) \times 10^{-167}</math>
|-
| <math>3000</math>
| <math>(-1.144 + 1.5702 i) \times 10^{-507}</math>
| <math>(-1.039 + 1.5534 i) \times 10^{-507}</math>
| <math>(-1.039 + 1.5552 i) \times 10^{-507}</math>
| <math>(-1.038 + 1.5535 i) \times 10^{-507}</math>
| <math>(-0.925 + 1.3933 i) \times 10^{-507}</math>
| <math>(-1.155 + 1.5686 i) \times 10^{-507}</math>
| <math>(-1.144 + 1.5701 i) \times 10^{-507}</math>
|-
| <math>10000</math>
| <math>(-0.558 - 4.088 i) \times 10^{-1700}</math>
| <math>(-0.692 - 4.067 i) \times 10^{-1700}</math>
| <math>(-0.687 - 4.067 i) \times 10^{-1700}</math>
| <math>(-0.692 - 4.066 i) \times 10^{-1700}</math>
| <math>(-0.673 - 3.948 i) \times 10^{-1700}</math>
| <math>(-0.548 - 4.089 i) \times 10^{-1700}</math>
| <math>(-0.558 - 4.088 i) \times 10^{-1700}</math>
|-
| <math>30000</math>
| <math>(3.160 - 6.737 i) \times 10^{-5110}</math>
| <math>(3.065 - 6.722 i) \times 10^{-5100}</math>
| <math>(3.066 - 6.722 i) \times 10^{-5100}</math>
| <math>(3.065 - 6.722 i) \times 10^{-5100}</math>
| <math>(2.853 - 6.286 i) \times 10^{-5100}</math>
| <math>(3.170 - 6.733 i) \times 10^{-5100}</math>
| <math>(3.160 - 6.737 i) \times 10^{-5100}</math>
|}


== Controlling |A+B|/|B_0| ==


See [[Controlling A+B/B_0]].


Mesh evaluations of <math>(A^{eff}+B^{eff})/B^{eff}_0</math> in the following ranges:

* [https://drive.google.com/open?id=1qbkvCBIt_OHnrtcDJWQvN_FUt-1zPv4m N between 11 and 19]
* [https://drive.google.com/open?id=1YBIA5gRv2DUXX74MLwfn9F2J0QJBWUh_ N between 20 and 150] ([https://drive.google.com/open?id=1ZBX7jNGXhQZQ50t8UX4boTPW93dmRAun raw data])
* [https://drive.google.com/file/d/1NvEv-1R4KTEchWMbJCpZA6Uf1xLkiYWM/view N between 151 and 300]
* [https://drive.google.com/open?id=15Xf9GsaAzydl-39zyG9nei5aZyaHwsFe The (A+B)/B0 mesh data for N=300 to 20, y=0.45, t=0.4, c=0.065]
* [https://drive.google.com/open?id=1kK8tV2bRfACm1lUKRFIktca58ZV8L8U3 c=0.26 for N=7 to 19, y=0.4, t=0.4]
* [https://drive.google.com/open?id=13_mzqvtaZCghmj7oAZtDRXnkb2zbxQH3 c=0.26 for N=19 to 7, y=0.45, t=0.4]

[https://github.com/km-git-acc/dbn_upper_bound/blob/master/dbn_upper_bound/python/research/mod_abbeff_lower_Nbounds.csv Here is a table of analytic lower bounds for <math>(A^{eff}+B^{eff})/B^{eff}_0</math> for <math>3 \leq N \leq 2000</math>].


== Controlling |H_t-A-B|/|B_0| ==


See [[Controlling H_t-A-B/B_0]].


[https://github.com/km-git-acc/dbn_upper_bound/blob/master/dbn_upper_bound/python/research/bounded_normalized_E1_and_E2_and_E3_and_overall_error.csv Here is a table of bounds on the error terms <math>E_1/B^{eff}_0, E_2/B^{eff}_0, E_3^*/B^{eff}_0</math> for N=3 to 2000].  [https://github.com/km-git-acc/dbn_upper_bound/blob/master/dbn_upper_bound/python/research/e1_e2_e3_sharper_Nbound.csv Here is a table] with some sharpened estimates from the PDF writeup.


[https://ibb.co/b7baZc Here is a graph] depicting <math>|H_t-A^{eff}-B^{eff}|/|B_0^{eff}|</math> and <math>(E_1+E_2+E_3^*)/|B_0^{eff}|</math> for <math>x \leq 1600</math>.


== Small values of x ==


Tables of <math>H_t(x+iy)</math> for small values of <math>x</math>:


* [https://github.com/km-git-acc/dbn_upper_bound/tree/master/dbn_upper_bound/python/research/H_t%20at%20small%20x x=0 to x=300 with step size 0.1]
* [https://pastebin.com/fim2swFu x=200 to x=600 with step size 0.1]
* [https://pastebin.com/jvSvDP69 x=600 to x=1000 with step size 0.1]
* [https://pastebin.com/NkFKs3pB x=1000 to x=1300 with step size 0.1]
* [https://pastebin.com/k7vC6e7n x=1300 to x=1600 with step size 0.1]
* [https://gist.githubusercontent.com/p15-git-acc/3ada0ff0b9ec77e23cb7cace0dcb8691/raw/807e1b0a16356a9bbd2a5af872f71bc064830c38/gistfile1.txt x=20 to x=1000, adaptive mesh]

[https://ibb.co/fOroa7 Here are some snapshots of <math>H_t/B^{eff}_0</math>].

In this range we will need [[Bounding the derivative of H_t]] or [[Bounding the derivative of H_t - second approach]] or [[Bounding the derivative of H_t - third approach]].

Tables of <math>H'_t(x+iy)</math>:

* [https://drive.google.com/open?id=1mtzrJ_-hBMyt90gVzJdNCq4RU6Pz_dQQ x=0 to x=100 with step size 0.1]
* [https://drive.google.com/open?id=1wSZJgoPp9-6C8CJqLr5g3E1uhEBHKmTJ x=100 to x=200 with step size 0.1]
* [https://drive.google.com/open?id=1q3h0x8jSF0Z1KQ9iLx23JI1oCXfrx0RL x=200 to x=300 with step size 0.1]

[https://drive.google.com/open?id=1ge0TD5hvs1O6BKLAz34JmTqxSrguvjsJ Here is a table] of <math>x</math>, pari/gp prec, <math>H_{t}, H^{'}_{t}, |H_{t}|, |H^{'}_{t}|, \frac{|H_{t}|}{|B_{0}^{eff}|}, \frac{|H^{'}_{t}|}{|B_{0}^{eff}|}</math> for x=0 to x=30 with step size 0.01.


[https://drive.google.com/open?id=1855iryE-7uDDEyW7hXJ5-3Njomsz-mMH Here is a plot] of <math>H_t/B_0</math> for the rectangle <math> \{x+iy: 0 \leq x \leq 300; 0.4 \leq y \leq 0.45\}</math>.  Here is [https://drive.google.com/open?id=1oGQ4HfXlEiC5WUnWHOzAt5EJQlEg9SRb an adaptive mesh plot]; here is a [https://drive.google.com/open?id=12zkFFXBF7H6Mjd1KBmLhZw10io3J_wer closeup near the origin].

Here is a [https://pastebin.com/TiFk6CfF script] for verifying the absence of zeroes of <math>H_t</math> in a rectangle.  It can eliminate zeros in the rectangle <math>\{0 \leq x \leq 1000, 0.4 \leq y \leq 0.45\}</math> when <math>t = 0.4</math>.


== Large negative values of <math>t</math> ==


See also [[Second attempt at computing H_t(x) for negative t]].

We heuristically compute <math>H_t(x)</math> in the regime where <math>x</math> is large and <math>t</math> is large and negative with <math>|t|/x \asymp 1</math>.  We shall only be interested in the zeroes, and so we discard any multiplicative factor which is non-zero: we write <math>X \sim Y</math> if <math>X</math> is equal (or approximately equal) to <math>Y</math> times something that is explicit and non-zero.


From equation (35) of the writeup we have

:<math>H_t(x) = \int_{\bf R} \frac{1}{8} \xi(\frac{1+ix}{2} + i |t|^{1/2} v) \frac{1}{\pi} e^{-v^2}\ dv \quad (3.1)</math>
:<math> \sim \int_{\bf R} \xi(\frac{1+ix}{2} + i |t|^{1/2} v) e^{-v^2}\ dv. \quad (3.2)</math>

To cancel off an exponential decay factor in the <math>\xi</math> function, it is convenient to shift the <math>v</math> variable by <math>\pi |t|^{1/2}/8</math>, thus

:<math> H_t(x) \sim \int_{\bf R} \xi(\frac{1+ix}{2} + i |t|^{1/2} v - \pi i |t|/8) e^{-(v - \pi |t|^{1/2}/8)^2}\ dv \quad (3.3)</math>
:<math> \sim \int_{\bf R} \xi(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2 + \pi |t|^{1/2} v / 4}\ dv \quad (3.4)</math>

where

:<math> \tilde x := x - \pi |t|/4 = x + \frac{\pi t}{4}. \quad (3.5)</math>

Now from the definition of <math>\xi</math> and the Stirling approximation we have

:<math> \xi(s) \sim M_0(s) \zeta(s)\quad (3.6)</math>

where <math>M_0</math> is defined in (6) of the writeup.  Thus

:<math> H_t(x) \sim \int_{\bf R} M_0(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) \zeta(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2 + \pi |t|^{1/2} v / 4}\ dv.\quad (3.7)</math>

By Taylor expansion we have

:<math> M_0(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) \sim M_0(\frac{1+i\tilde x}{2}) \exp( \alpha( \frac{1+i\tilde x}{2} ) i |t|^{1/2} v + \alpha'(\frac{1+i \tilde x}{2}) \frac{-|t| v^2}{2} )\quad (3.8)</math>
:<math> \sim \exp( \alpha( \frac{1+i\tilde x}{2} ) i |t|^{1/2} v + \alpha'(\frac{1+i \tilde x}{2}) \frac{-|t| v^2}{2} )\quad (3.9)</math>

where <math>\alpha</math> is defined in equation (8) of the writeup.  We have the approximations

:<math> \alpha(\frac{1+i\tilde x}{2} ) \approx \frac{1}{2} \log \frac{\tilde x}{4\pi} + \frac{i\pi}{4} \quad (3.10)</math>
 
and

:<math> \alpha'(\frac{1+i\tilde x}{2} ) \approx \frac{-i}{\tilde x} \quad (3.11)</math>

and hence

:<math> H_t(x) \sim \int_{\bf R} \exp( \frac{i |t|^{1/2} v}{2} \log \frac{\tilde x}{4\pi} - \pi |t|^{1/2} v/4 + i |t| v^2 / 2\tilde x) \zeta(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2 + \pi |t|^{1/2} v / 4}\ dv.\quad (3.12)</math>
 
The two factors of <math>\exp( \pi |t|^{1/2} v/4 ) </math> cancel.  If we now write

:<math>N := \sqrt{\frac{\tilde x}{4\pi}}\quad (3.13)</math>

and

:<math>u := |t|/N^2 = 4\pi |t|/\tilde x,\quad (3.14)</math>

we conclude that

:<math> H_t(x) \sim \int_{\bf R} \exp( i |t|^{1/2} v \log N + i u v^2 / 8 \pi) \zeta(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2}\ dv.\quad (3.15)</math>

If we formally write <math>\zeta(s) = \sum_n \frac{1}{n^s}</math> (ignoring convergence issues) we obtain

:<math> H_t(x) \sim \sum_n \int_{\bf R} \exp( i |t|^{1/2} v \log N + i u v^2 / 8 \pi) n^{-\frac{1+i\tilde x}{2} - i |t|^{1/2} v} e^{-v^2}\ dv\quad (3.16)</math>
:<math> \sim \sum_n \int_{\bf R} \exp( - i |t|^{1/2} v \log \frac{n}{N} + i u v^2 / 8 \pi -\frac{1+i\tilde x}{2} \log \frac{n}{N} ) e^{-v^2}\ dv.\quad (3.17)</math>

We can compute the <math>v</math> integral to obtain

:<math> H_t(x) \sim \sum_n \exp( - \frac{|t| \log^2 \frac{n}{N}}{4 (1 - iu / 8 \pi)} -\frac{1+i\tilde x}{2} \log \frac{n}{N}).\quad (3.18)</math>

Using the Taylor approximation

:<math> \log \frac{n}{N} \approx \frac{n-N}{N} - \frac{(n-N)^2}{2N^2} \quad (3.19)</math>

and dropping some small terms, we obtain

:<math> H_t(x) \sim \sum_n \exp( - \frac{|t| (n-N)^2}{4 N^2 (1 - iu/8\pi)} -\frac{i\tilde x}{2} \frac{n-N}{N} + \frac{i \tilde x (n-N)^2}{4N^2} ).\quad (3.20)</math>

Writing <math>\tilde x = 4\pi N^2</math> and <math>|t| = u N^2</math>, this becomes

:<math> H_t(x) \sim \sum_n \exp( - \frac{2\pi u (n-N)^2}{8\pi - iu} -2 \pi i N(n-N) + \pi i (n-N)^2 ).\quad (3.21)</math>

Writing

:<math> N(n-N) = \frac{1}{2} n^2 - \frac{1}{2} (n-N)^2 - \frac{1}{2} N^2 \quad (3.22)</math>

we thus have

:<math> H_t(x) \sim \sum_n \exp( - \frac{2 \pi u (n-N)^2}{8 \pi - iu} - \pi i n^2 + 2 \pi i (n-N)^2 )\quad (3.23)</math>
:<math> \sim \sum_n \exp( \frac{16 \pi^2 i (n-N)^2}{8 \pi - iu} ) e^{\pi i n}\quad (3.24)</math>
:<math> \sim \theta_{01}( \frac{16 \pi N}{8\pi - iu}, \frac{16 \pi}{8\pi - iu} )\quad (3.25)</math>

where <math>\theta_{01}</math> is the theta function defined in [https://en.wikipedia.org/wiki/Theta_function#Auxiliary_functions this Wikipedia page].  Using the Jacobi identity we then have

:<math> H_t(x) \sim \theta_{10}(N, \frac{iu - 8\pi}{16 \pi} )\quad (3.26)</math>
:<math> \sim \theta( N + \frac{1}{2} \frac{iu - 8\pi}{16 \pi}, \frac{iu - 8\pi}{16 \pi})\quad (3.27)</math>
:<math> \sim \sum_n \exp( - \pi i n(n+1) / 2 ) e^{2\pi i (n+1/2) N} e^{-u n(n+1)/16}.\quad (3.28)</math>

As a sanity check, one can verify that the RHS is real-valued, just as <math>H_t(x)</math> is (by the functional equation).
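
As a quick illustration of this last point, one can sum a truncated version of the series in (3.28) numerically and observe that the imaginary part is negligible.  The sketch below uses arbitrarily chosen values of <math>N</math> and <math>u</math> and is not part of the writeup:

<pre>
# Numerically check that the truncated series in (3.28) is essentially real.
from mpmath import mp, mpc, exp, pi

mp.dps = 30

def H_series(N, u, nterms=30):
    total = mpc(0)
    for n in range(-nterms, nterms + 1):
        total += exp(-pi * 1j * n * (n + 1) / 2) \
            * exp(2 * pi * 1j * (n + mpc(1) / 2) * N) \
            * exp(-u * n * (n + 1) / 16)
    return total

print(H_series(N=10.3, u=2.0))  # imaginary part should be tiny compared to the real part
</pre>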
[[Category:Polymath15]]

Latest revision as of 22:26, 1 January 2019

We are initially focusing attention on the following

Test problem For [math]\displaystyle{ t=y=0.4 }[/math], can one prove that [math]\displaystyle{ H_t(x+iy) \neq 0 }[/math] for all [math]\displaystyle{ x \geq 0 }[/math]?

If we can show this, it is likely that (with the additional use of the argument principle, and some further information on the behaviour of [math]\displaystyle{ H_t(x+iy) }[/math] at [math]\displaystyle{ y=0.4 }[/math]) that one can show that [math]\displaystyle{ H_t(x+iy) \neq 0 }[/math] for all [math]\displaystyle{ y \geq 0.4 }[/math] as well. This would give a new upper bound

[math]\displaystyle{ \Lambda \leq 0.4 + \frac{1}{2} (0.4)^2 = 0.48 }[/math]

for the de Bruijn-Newman constant.

For very small values of [math]\displaystyle{ x }[/math] we expect to be able to establish this by direct calculation of [math]\displaystyle{ H_t(x+iy) }[/math]. For medium or large values, the strategy is to use a suitable approximation

[math]\displaystyle{ H_t(x+iy) \approx A + B }[/math]

for some relatively easily computable quantities [math]\displaystyle{ A = A_t(x+iy), B = B_t(x+iy) }[/math] (it may possibly be necessary to use a refined approximation [math]\displaystyle{ A+B-C }[/math] instead). The quantity [math]\displaystyle{ B }[/math] contains a non-zero main term [math]\displaystyle{ B_0 }[/math] which is expected to roughly dominate. To show [math]\displaystyle{ H_t(x+iy) }[/math] is non-zero, it would suffice to show that

[math]\displaystyle{ \frac{|H_t - A - B|}{|B_0|} \lt \frac{|A + B|}{|B_0|}. }[/math]

Thus one will seek upper bounds on the error [math]\displaystyle{ \frac{|H_t - A - B|}{|B_0|} }[/math] and lower bounds on [math]\displaystyle{ \frac{|A+B|}{|B_0|} }[/math] for various ranges of [math]\displaystyle{ x }[/math]. Numerically it seems that the RHS stays above 0.4 as soon as [math]\displaystyle{ x }[/math] is moderately large, while the LHS stays below 0.1, which looks promising for the rigorous arguments.

Choices of approximation

There are a number of slightly different approximations we have used in previous discussion. The first approximation was [math]\displaystyle{ A+B }[/math], where

[math]\displaystyle{ A := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{s+4}{2\pi n^2})}{n^s} }[/math]
[math]\displaystyle{ B := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{5-s}{2\pi n^2})}{n^{1-s}} }[/math]
[math]\displaystyle{ B_0 := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \exp( \frac{t}{16} \log^2 \frac{5-s}{2\pi} ) }[/math]
[math]\displaystyle{ s := \frac{1-y+ix}{2} }[/math]
[math]\displaystyle{ N := \lfloor \sqrt{\frac{\mathrm{Im} s}{2\pi}} \rfloor = \lfloor \sqrt{\frac{x}{4\pi}} \rfloor. }[/math]

There is also the refinement [math]\displaystyle{ A+B-C }[/math], where

[math]\displaystyle{ C:= \frac{1}{8} \exp(-\frac{t\pi^2}{64}) \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i N)^{s-1} \Psi( \frac{s}{2\pi i N}-N ) }[/math]
[math]\displaystyle{ \Psi(\alpha) := 2\pi \frac{\cos \pi(\frac{1}{2} \alpha^2 - \alpha - \frac{\pi}{8})}{\cos(\pi \alpha)} \exp( \frac{i \pi}{2} \alpha^2 - \frac{5 \pi i}{8}). }[/math]

The first approximation was modified slightly to [math]\displaystyle{ A'+B' }[/math], where

[math]\displaystyle{ A' := \frac{2}{8} \pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s+4}{2}-\frac{1}{2}) \log \frac{s+4}{2} - \frac{s+4}{2}) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{s+4}{2\pi n^2})}{n^s} }[/math]
[math]\displaystyle{ B' := \frac{2}{8} \pi^{-(1-s)/2} \sqrt{2\pi} \exp( (\frac{5-s}{2}-\frac{1}{2}) \log \frac{5-s}{2} - \frac{5-s}{2}) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{5-s}{2\pi n^2})}{n^{1-s}} }[/math]
[math]\displaystyle{ B'_0 := \frac{2}{8} \pi^{-(1-s)/2} \sqrt{2\pi} \exp( (\frac{5-s}{2}-\frac{1}{2}) \log \frac{5-s}{2} - \frac{5-s}{2}) \exp( \frac{t}{16} \log^2 \frac{5-s}{2\pi} ) }[/math]
[math]\displaystyle{ s := \frac{1-y+ix}{2} }[/math]
[math]\displaystyle{ N := \lfloor \sqrt{\frac{\mathrm{Im} s}{2\pi}} \rfloor = \lfloor \sqrt{\frac{x}{4\pi}} \rfloor. }[/math]

In Effective bounds on H_t - second approach, a more refined approximation [math]\displaystyle{ A^{eff} + B^{eff} }[/math] was introduced:

[math]\displaystyle{ A^{eff} := \frac{1}{8} \exp( \frac{t}{4} \alpha_1(\frac{1-y+ix}{2})^2 ) H_{0,1}(\frac{1-y+ix}{2}) \sum_{n=1}^N \frac{1}{n^{\frac{1-y+ix}{2} + \frac{t \alpha_1(\frac{1-y+ix}{2})}{2} - \frac{t}{4} \log n}} }[/math]
[math]\displaystyle{ B^{eff} := \frac{1}{8} \exp( \frac{t}{4} \overline{\alpha_1(\frac{1+y+ix}{2})}^2 ) \overline{H_{0,1}(\frac{1+y+ix}{2})} \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2} + \frac{t \overline{\alpha_1(\frac{1+y+ix}{2})}}{2} - \frac{t}{4} \log n}} }[/math]
[math]\displaystyle{ B^{eff}_0 := \frac{1}{8} \exp( \frac{t}{4} \overline{\alpha_1(\frac{1+y+ix}{2})}^2 ) \overline{H_{0,1}(\frac{1+y+ix}{2})} }[/math]
[math]\displaystyle{ H_{0,1}(s) := \frac{s (s-1)}{2} \pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s}{2} - \frac{1}{2}) \log \frac{s}{2} - \frac{s}{2} ) }[/math]
[math]\displaystyle{ \alpha_1(s) := \frac{1}{2s} + \frac{1}{s-1} + \frac{1}{2} \log \frac{s}{2\pi} }[/math]
[math]\displaystyle{ N := \lfloor \sqrt{ \frac{T'}{2\pi}} \rfloor }[/math]
[math]\displaystyle{ T' := \frac{x}{2} + \frac{\pi t}{8}. }[/math]

There is a refinement [math]\displaystyle{ A^{eff}+B^{eff}-C^{eff} }[/math], where

[math]\displaystyle{ C^{eff} := \frac{1}{8} \exp( \frac{t\pi^2}{64}) \frac{s'(s'-1)}{2} (-1)^N ( \pi^{-s'/2} \Gamma(s'/2) a^{-\sigma} C_0(p) U + \pi^{-(1-s')/2} \Gamma((1-s')/2) a^{-(1-\sigma)} \overline{C_0(p)} \overline{U}) }[/math]
[math]\displaystyle{ s' := \frac{1-y}{2} + iT' = \frac{1-y+ix}{2} + \frac{\pi i t}{8} }[/math]
[math]\displaystyle{ a := \sqrt{\frac{T'}{2\pi}} }[/math]
[math]\displaystyle{ p := 1 - 2(a-N) }[/math]
[math]\displaystyle{ \sigma := \mathrm{Re} s' = \frac{1-y}{2} }[/math]
[math]\displaystyle{ U := \exp( -i (\frac{T'}{2} \log \frac{T'}{2\pi} - \frac{T'}{2} - \frac{\pi}{8} )) }[/math]
[math]\displaystyle{ C_0(p) := \frac{ \exp( \pi i (p^2/2 + 3/8) )- i \sqrt{2} \cos(\pi p/2)}{2 \cos(\pi p)}. }[/math]

One can also replace [math]\displaystyle{ C^{eff} }[/math] by the very slightly different quantity

[math]\displaystyle{ \tilde C^{eff} :=\frac{2 e^{-\pi i y/8}}{8} \exp( \frac{t\pi^2}{64}) (-1)^N \mathrm{Re}( H_{0,1}(iT') C_0(p) U e^{\pi i/8} ). }[/math]

Finally, a simplified approximation is [math]\displaystyle{ A^{toy} + B^{toy} }[/math], where

[math]\displaystyle{ A^{toy} := B^{toy}_0 \exp(i ((\frac{x}{2} + \frac{\pi t}{8}) \log \frac{x}{4\pi} - \frac{x}{2} - \frac{\pi}{4} )) N^{-y} \sum_{n=1}^N \frac{1}{n^{\frac{1-y+ix}{2} + \frac{t}{4} \log \frac{N^2}{n} + \pi i t/8}} }[/math]
[math]\displaystyle{ B^{toy} := B^{toy}_0 \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2} + \frac{t}{4} \log \frac{N^2}{n} - \pi i t/8}} }[/math]
[math]\displaystyle{ B^{toy}_0 := \frac{\sqrt{2}}{4} \pi^2 N^{\frac{7+y}{2}} \exp( i (-\frac{x}{4} \log \frac{x}{4\pi} + \frac{x}{4} + \frac{9-y}{8} \pi) + \frac{t}{16} (\log \frac{x}{4\pi} - \frac{\pi i}{2})^2 ) e^{-\pi x/8} }[/math]
[math]\displaystyle{ N := \lfloor \sqrt{\frac{x}{4\pi}} \rfloor. }[/math]

Here is a table comparing the size of the various main terms:

[math]\displaystyle{ x }[/math] [math]\displaystyle{ B_0 }[/math] [math]\displaystyle{ B'_0 }[/math] [math]\displaystyle{ B^{eff}_0 }[/math] [math]\displaystyle{ B^{toy}_0 }[/math]
[math]\displaystyle{ 10^3 }[/math] [math]\displaystyle{ (3.4405 + 3.5443 i) \times 10^{-167} }[/math] [math]\displaystyle{ (3.4204 + 3.5383 i) \times 10^{-167} }[/math] [math]\displaystyle{ (3.4426 + 3.5411 i) \times 10^{-167} }[/math] [math]\displaystyle{ (2.3040 + 2.3606 i) \times 10^{-167} }[/math]
[math]\displaystyle{ 10^4 }[/math] [math]\displaystyle{ (-1.1843 - 7.7882 i) \times 10^{-1700} }[/math] [math]\displaystyle{ (-1.1180 - 7.7888 i) \times 10^{-1700} }[/math] [math]\displaystyle{ (-1.1185 - 7.7879 i) \times 10^{-1700} }[/math] [math]\displaystyle{ (-1.1155 - 7.5753 i) \times 10^{-1700} }[/math]
[math]\displaystyle{ 10^5 }[/math] [math]\displaystyle{ (-7.6133 + 2.5065 i) * 10^{-17047} }[/math] [math]\displaystyle{ (-7.6134 + 2.5060 i) * 10^{-17047} }[/math] [math]\displaystyle{ (-7.6134 + 2.5059 i) * 10^{-17047} }[/math] [math]\displaystyle{ (-7.5483 + 2.4848 i) * 10^{-17047} }[/math]
[math]\displaystyle{ 10^6 }[/math] [math]\displaystyle{ (-3.1615 - 7.7093 i) * 10^{-170537} }[/math] [math]\displaystyle{ (-3.1676 - 7.7063 i) * 10^{-170537} }[/math] [math]\displaystyle{ (-3.1646 - 7.7079 i) * 10^{-170537} }[/math] [math]\displaystyle{ (-3.1590 - 7.6898 i) * 10^{-170537} }[/math]
[math]\displaystyle{ 10^7 }[/math] [math]\displaystyle{ (2.1676 - 9.6330 i) * 10^{-1705458} }[/math] [math]\displaystyle{ (2.1711 - 9.6236 i) * 10^{-1705458} }[/math] [math]\displaystyle{ (2.1571 - 9.6329 i) * 10^{-1705458} }[/math] [math]\displaystyle{ (2.2566 - 9.6000 i) * 10^{-1705458} }[/math]

Here some typical values of [math]\displaystyle{ B/B_0 }[/math] (note that [math]\displaystyle{ B/B_0 }[/math] and [math]\displaystyle{ B'/B'_0 }[/math] are identical):

[math]\displaystyle{ x }[/math] [math]\displaystyle{ B/B_0 }[/math] [math]\displaystyle{ B'/B'_0 }[/math] [math]\displaystyle{ B^{eff}/B^{eff}_0 }[/math] [math]\displaystyle{ B^{toy}/B^{toy}_0 }[/math]
[math]\displaystyle{ 10^3 }[/math] [math]\displaystyle{ 0.7722 + 0.6102 i }[/math] [math]\displaystyle{ 0.7722 + 0.6102 i }[/math] [math]\displaystyle{ 0.7733 + 0.6101 i }[/math] [math]\displaystyle{ 0.7626 + 0.6192 i }[/math]
[math]\displaystyle{ 10^4 }[/math] [math]\displaystyle{ 0.7434 - 0.0126 i }[/math] [math]\displaystyle{ 0.7434 - 0.0126 i }[/math] [math]\displaystyle{ 0.7434 - 0.0126 i }[/math] [math]\displaystyle{ 0.7434 - 0.0124 i }[/math]
[math]\displaystyle{ 10^5 }[/math] [math]\displaystyle{ 1.1218 - 0.3211 i }[/math] [math]\displaystyle{ 1.1218 - 0.3211 i }[/math] [math]\displaystyle{ 1.1218 - 0.3211 i }[/math] [math]\displaystyle{ 1.1219 - 0.3213 i }[/math]
[math]\displaystyle{ 10^6 }[/math] [math]\displaystyle{ 1.3956 - 0.5682 i }[/math] [math]\displaystyle{ 1.3956 - 0.5682 i }[/math] [math]\displaystyle{ 1.3955 - 0.5682 i }[/math] [math]\displaystyle{ 1.3956 - 0.5683 i }[/math]
[math]\displaystyle{ 10^7 }[/math] [math]\displaystyle{ 1.6400 + 0.0198 i }[/math] [math]\displaystyle{ 1.6400 + 0.0198 i }[/math] [math]\displaystyle{ 1.6401 + 0.0198 i }[/math] [math]\displaystyle{ 1.6400 - 0.0198 i }[/math]

Here some typical values of [math]\displaystyle{ A/B_0 }[/math], which seems to be about an order of magnitude smaller than [math]\displaystyle{ B/B_0 }[/math] in many cases:

[math]\displaystyle{ x }[/math] [math]\displaystyle{ A/B_0 }[/math] [math]\displaystyle{ A'/B'_0 }[/math] [math]\displaystyle{ A^{eff}/B^{eff}_0 }[/math] [math]\displaystyle{ A^{toy}/B^{toy}_0 }[/math]
[math]\displaystyle{ 10^3 }[/math] [math]\displaystyle{ -0.3856 - 0.0997 i }[/math] [math]\displaystyle{ -0.3857 - 0.0953 i }[/math] [math]\displaystyle{ -0.3854 - 0.1002 i }[/math] [math]\displaystyle{ -0.4036 - 0.0968 i }[/math]
[math]\displaystyle{ 10^4 }[/math] [math]\displaystyle{ -0.2199 - 0.0034 i }[/math] [math]\displaystyle{ -0.2199 - 0.0036 i }[/math] [math]\displaystyle{ -0.2199 - 0.0033 i }[/math] [math]\displaystyle{ -0.2208 - 0.0033 i }[/math]
[math]\displaystyle{ 10^5 }[/math] [math]\displaystyle{ 0.1543 + 0.1660 i }[/math] [math]\displaystyle{ 0.1543 + 0.1660 i }[/math] [math]\displaystyle{ 0.1543 + 0.1660 i }[/math] [math]\displaystyle{ 0.1544 + 0.1663 i }[/math]
[math]\displaystyle{ 10^6 }[/math] [math]\displaystyle{ -0.1013 - 0.1887 i }[/math] [math]\displaystyle{ -0.1010 - 0.1889 i }[/math] [math]\displaystyle{ -0.1011 - 0.1890 i }[/math] [math]\displaystyle{ -0.1012 - 0.1888 i }[/math]
[math]\displaystyle{ 10^7 }[/math] [math]\displaystyle{ -0.1018 + 0.1135 i }[/math] [math]\displaystyle{ -0.1022 + 0.1133 i }[/math] [math]\displaystyle{ -0.1025 + 0.1128 i }[/math] [math]\displaystyle{ -0.0986 + 0.1163 i }[/math]

Here some typical values of [math]\displaystyle{ C/B_0 }[/math], which is significantly smaller than either [math]\displaystyle{ A/B_0 }[/math] or [math]\displaystyle{ B/B_0 }[/math]:


[math]\displaystyle{ x }[/math] [math]\displaystyle{ C/B_0 }[/math] [math]\displaystyle{ C^{eff}/B^{eff}_0 }[/math]
[math]\displaystyle{ 10^3 }[/math] [math]\displaystyle{ -0.1183 + 0.0697i }[/math] [math]\displaystyle{ -0.0581 + 0.0823 i }[/math]
[math]\displaystyle{ 10^4 }[/math] [math]\displaystyle{ -0.0001 - 0.0184 i }[/math] [math]\displaystyle{ -0.0001 - 0.0172 i }[/math]
[math]\displaystyle{ 10^5 }[/math] [math]\displaystyle{ -0.0033 - 0.0005i }[/math] [math]\displaystyle{ -0.0031 - 0.0005i }[/math]
[math]\displaystyle{ 10^6 }[/math] [math]\displaystyle{ -0.0001 - 0.0006 i }[/math] [math]\displaystyle{ -0.0001 - 0.0006 i }[/math]
[math]\displaystyle{ 10^7 }[/math] [math]\displaystyle{ -0.0000 - 0.0001 i }[/math] [math]\displaystyle{ -0.0000 - 0.0001 i }[/math]

Some values of [math]\displaystyle{ H_t }[/math] and its approximations at small values of [math]\displaystyle{ x }[/math] source source:

{| class="wikitable"
! [math]\displaystyle{ x }[/math] !! [math]\displaystyle{ H_t }[/math] !! [math]\displaystyle{ A+B }[/math] !! [math]\displaystyle{ A'+B' }[/math] !! [math]\displaystyle{ A^{eff}+B^{eff} }[/math] !! [math]\displaystyle{ A^{toy}+B^{toy} }[/math] !! [math]\displaystyle{ A+B-C }[/math] !! [math]\displaystyle{ A^{eff}+B^{eff}-C^{eff} }[/math]
|-
| [math]\displaystyle{ 10 }[/math] || [math]\displaystyle{ (3.442 - 0.168 i) \times 10^{-2} }[/math] || [math]\displaystyle{ 0 }[/math] || [math]\displaystyle{ 0 }[/math] || [math]\displaystyle{ 0 }[/math] || N/A || N/A || [math]\displaystyle{ (3.501 - 0.316 i) \times 10^{-2} }[/math]
|-
| [math]\displaystyle{ 30 }[/math] || [math]\displaystyle{ (-1.000 - 0.071 i) \times 10^{-4} }[/math] || [math]\displaystyle{ (-0.650 - 0.188 i) \times 10^{-4} }[/math] || [math]\displaystyle{ (-0.211 - 0.192 i) \times 10^{-4} }[/math] || [math]\displaystyle{ (-0.670 - 0.114 i) \times 10^{-4} }[/math] || [math]\displaystyle{ (-0.136 + 0.021 i) \times 10^{-4} }[/math] || [math]\displaystyle{ (-1.227 - 0.058 i) \times 10^{-4} }[/math] || [math]\displaystyle{ (-1.032 - 0.066 i) \times 10^{-4} }[/math]
|-
| [math]\displaystyle{ 100 }[/math] || [math]\displaystyle{ (6.702 + 3.134 i) \times 10^{-16} }[/math] || [math]\displaystyle{ (2.890 + 3.667 i) \times 10^{-16} }[/math] || [math]\displaystyle{ (2.338 + 3.742 i) \times 10^{-16} }[/math] || [math]\displaystyle{ (2.955 + 3.650 i) \times 10^{-16} }[/math] || [math]\displaystyle{ (0.959 + 0.871 i) \times 10^{-16} }[/math] || [math]\displaystyle{ (6.158 + 12.226 i) \times 10^{-16} }[/math] || [math]\displaystyle{ (6.763 + 3.074 i) \times 10^{-16} }[/math]
|-
| [math]\displaystyle{ 300 }[/math] || [math]\displaystyle{ (-4.016 - 1.401 i) \times 10^{-49} }[/math] || [math]\displaystyle{ (-5.808 - 1.140 i) \times 10^{-49} }[/math] || [math]\displaystyle{ (-5.586 - 1.228 i) \times 10^{-49} }[/math] || [math]\displaystyle{ (-5.824 - 1.129 i) \times 10^{-49} }[/math] || [math]\displaystyle{ (-2.677 - 0.327 i) \times 10^{-49} }[/math] || [math]\displaystyle{ (-3.346 + 6.818 i) \times 10^{-49} }[/math] || [math]\displaystyle{ (-4.032 - 1.408 i) \times 10^{-49} }[/math]
|-
| [math]\displaystyle{ 1000 }[/math] || [math]\displaystyle{ (0.015 + 3.051 i) \times 10^{-167} }[/math] || [math]\displaystyle{ (-0.479 + 3.126 i) \times 10^{-167} }[/math] || [math]\displaystyle{ (-0.516 + 3.135 i) \times 10^{-167} }[/math] || [math]\displaystyle{ (-0.474 + 3.124 i) \times 10^{-167} }[/math] || [math]\displaystyle{ (-0.406 + 2.051 i) \times 10^{-167} }[/math] || [math]\displaystyle{ (0.175 + 3.306 i) \times 10^{-167} }[/math] || [math]\displaystyle{ (0.017 + 3.047 i) \times 10^{-167} }[/math]
|-
| [math]\displaystyle{ 3000 }[/math] || [math]\displaystyle{ (-1.144 + 1.5702 i) \times 10^{-507} }[/math] || [math]\displaystyle{ (-1.039 + 1.5534 i) \times 10^{-507} }[/math] || [math]\displaystyle{ (-1.039 + 1.5552 i) \times 10^{-507} }[/math] || [math]\displaystyle{ (-1.038 + 1.5535 i) \times 10^{-507} }[/math] || [math]\displaystyle{ (-0.925 + 1.3933 i) \times 10^{-507} }[/math] || [math]\displaystyle{ (-1.155 + 1.5686 i) \times 10^{-507} }[/math] || [math]\displaystyle{ (-1.144 + 1.5701 i) \times 10^{-507} }[/math]
|-
| [math]\displaystyle{ 10000 }[/math] || [math]\displaystyle{ (-0.558 - 4.088 i) \times 10^{-1700} }[/math] || [math]\displaystyle{ (-0.692 - 4.067 i) \times 10^{-1700} }[/math] || [math]\displaystyle{ (-0.687 - 4.067 i) \times 10^{-1700} }[/math] || [math]\displaystyle{ (-0.692 - 4.066 i) \times 10^{-1700} }[/math] || [math]\displaystyle{ (-0.673 - 3.948 i) \times 10^{-1700} }[/math] || [math]\displaystyle{ (-0.548 - 4.089 i) \times 10^{-1700} }[/math] || [math]\displaystyle{ (-0.558 - 4.088 i) \times 10^{-1700} }[/math]
|-
| [math]\displaystyle{ 30000 }[/math] || [math]\displaystyle{ (3.160 - 6.737 i) \times 10^{-5100} }[/math] || [math]\displaystyle{ (3.065 - 6.722 i) \times 10^{-5100} }[/math] || [math]\displaystyle{ (3.066 - 6.722 i) \times 10^{-5100} }[/math] || [math]\displaystyle{ (3.065 - 6.722 i) \times 10^{-5100} }[/math] || [math]\displaystyle{ (2.853 - 6.286 i) \times 10^{-5100} }[/math] || [math]\displaystyle{ (3.170 - 6.733 i) \times 10^{-5100} }[/math] || [math]\displaystyle{ (3.160 - 6.737 i) \times 10^{-5100} }[/math]
|}

== Controlling |A+B|/|B_0| ==

See [[Controlling A+B/B_0]].

Mesh evaluations of [math]\displaystyle{ |A^{eff}+B^{eff}|/|B^{eff}_0| }[/math] in the ranges:

Here is a table of analytic lower bounds for [math]\displaystyle{ |A^{eff}+B^{eff}|/|B^{eff}_0| }[/math] for [math]\displaystyle{ 3 \leq N \leq 2000 }[/math].

== Controlling |H_t-A-B|/|B_0| ==

See [[Controlling H_t-A-B/B_0]].

Here is a table of bounds on the error terms [math]\displaystyle{ E_1/B^{eff}_0, E_2/B^{eff}_0, E_3^*/B^{eff}_0 }[/math] for [math]\displaystyle{ N }[/math] from 3 to 2000. Here is a table with some sharpened estimates from the PDF writeup.

Here is a graph depicting [math]\displaystyle{ |H_t-A^{eff}-B^{eff}|/|B_0^{eff}| }[/math] and [math]\displaystyle{ (E_1+E_2+E_3^*)/|B_0^{eff}| }[/math] for [math]\displaystyle{ x \leq 1600 }[/math].

== Small values of x ==

Tables of [math]\displaystyle{ H_t(x+iy) }[/math] for small values of [math]\displaystyle{ x }[/math]:


Here are some snapshots of [math]\displaystyle{ H_t/B^{eff}_0 }[/math].

In this range we will need [[Bounding the derivative of H_t]], [[Bounding the derivative of H_t - second approach]], or [[Bounding the derivative of H_t - third approach]].

Tables of [math]\displaystyle{ H'_t(x+iy) }[/math]:

Here is a table of [math]\displaystyle{ x }[/math], the pari/gp working precision, and [math]\displaystyle{ H_{t}, H^{'}_{t}, |H_{t}|, |H^{'}_{t}|, \frac{|H_{t}|}{|B_{0}^{eff}|}, \frac{|H^{'}_{t}|}{|B_{0}^{eff}|} }[/math] for [math]\displaystyle{ x }[/math] from 0 to 30 with step size 0.01.
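For orientation, here is a minimal sketch of how values of this kind can be computed directly from the definition [math]\displaystyle{ H_t(z) = \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du }[/math], differentiating under the integral sign for the derivative. This is not the pari/gp script used to produce the table; it assumes the usual super-exponentially decaying series for [math]\displaystyle{ \Phi }[/math], and the values [math]\displaystyle{ t = y = 0.4 }[/math] are just the sample parameters used elsewhere in this section.

<pre>
# Minimal sketch (not the pari/gp script used for the table): evaluate H_t(z) and H_t'(z)
# from H_t(z) = \int_0^\infty e^{t u^2} \Phi(u) \cos(z u) du, assuming the usual
# super-exponentially decaying series for \Phi.
from mpmath import mp, mpf, mpc, exp, cos, sin, pi, quad

mp.dps = 30  # working precision in digits

def Phi(u):
    # Phi(u) = sum_{n>=1} (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u});
    # a handful of terms suffices since the tail decays extremely fast.
    return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
               for n in range(1, 6))

def H_and_deriv(t, z):
    # H_t(z) and d/dz H_t(z), by differentiating under the integral sign.
    f  = lambda u: exp(t*u**2) * Phi(u) * cos(z*u)
    fp = lambda u: -u * exp(t*u**2) * Phi(u) * sin(z*u)
    # Phi is negligible beyond u ~ 2, so integrating over [0, 6] captures the integral.
    return quad(f, [0, 6]), quad(fp, [0, 6])

if __name__ == "__main__":
    t, y = mpf("0.4"), mpf("0.4")   # sample values used in this section
    for x in (10, 30):
        Ht, Htp = H_and_deriv(t, mpc(x, y))
        print(x, Ht, Htp)
</pre>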


Here is a plot of [math]\displaystyle{ H_t/B_0 }[/math] on the rectangle [math]\displaystyle{ \{x+iy: 0 \leq x \leq 300,\ 0.4 \leq y \leq 0.45\} }[/math]. Here is an adaptive mesh plot; here is a closeup near the origin.

Here is a script for verifying the absence of zeroes of [math]\displaystyle{ H_t }[/math] in a rectangle. It can eliminate zeroes in the rectangle [math]\displaystyle{ \{0 \leq x \leq 1000, 0.4 \leq y \leq 0.45\} }[/math] when [math]\displaystyle{ t = 0.4 }[/math].
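The linked script should be consulted for the actual verification; purely as an illustration of one standard way to do such a check, here is a hedged sketch based on the argument principle: if [math]\displaystyle{ H_t }[/math] is non-vanishing on the boundary of the rectangle and its winding number around the boundary is zero, there are no zeroes inside. The evaluator passed in is a placeholder (for instance, the integral sketch above could be used).

<pre>
# Sketch of an argument-principle check on a rectangle (illustration only; not the linked script).
# `f` is any evaluator of H_t, e.g. f = lambda z: H_and_deriv(0.4, z)[0] from the sketch above.
import cmath

def winding_number(f, corners, samples_per_edge=500):
    # Total change of arg f(z) / (2 pi) along the closed polygon through `corners`.
    # The sampling must be fine enough that arg f changes by less than pi between
    # consecutive samples; if in doubt, increase samples_per_edge and compare.
    pts = []
    for a, b in zip(corners, corners[1:] + corners[:1]):
        pts += [a + (b - a) * k / samples_per_edge for k in range(samples_per_edge)]
    total = 0.0
    for p, q in zip(pts, pts[1:] + pts[:1]):
        total += cmath.phase(complex(f(q)) / complex(f(p)))  # principal-branch increment
    return total / (2 * cmath.pi)

# Example usage (hypothetical evaluator `ht`; a winding number of 0 means no zeroes inside):
# corners = [0 + 0.4j, 1000 + 0.4j, 1000 + 0.45j, 0 + 0.45j]
# print(round(winding_number(lambda z: ht(0.4, z), corners)))
</pre>

To make such a check rigorous one also needs a positive lower bound for [math]\displaystyle{ |H_t| }[/math] on the boundary, for instance via the approximations and error bounds discussed above.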


== Large negative values of [math]\displaystyle{ t }[/math] ==

See also [[Second attempt at computing H_t(x) for negative t]].

We heuristically compute [math]\displaystyle{ H_t(x) }[/math] in the regime where [math]\displaystyle{ x }[/math] is large and [math]\displaystyle{ t }[/math] is large and negative, with [math]\displaystyle{ |t|/x \asymp 1 }[/math]. We shall only be interested in the zeroes, so we discard any multiplicative factor that is non-zero: we write [math]\displaystyle{ X \sim Y }[/math] if [math]\displaystyle{ X }[/math] is equal (or approximately equal) to [math]\displaystyle{ Y }[/math] times something that is explicit and non-zero.

From equation (35) of the writeup we have

[math]\displaystyle{ H_t(x) = \int_{\bf R} \frac{1}{8} \xi(\frac{1+ix}{2} + i |t|^{1/2} v) \frac{1}{\sqrt{\pi}} e^{-v^2}\ dv \quad (3.1) }[/math]
[math]\displaystyle{ \sim \int_{\bf R} \xi(\frac{1+ix}{2} + i |t|^{1/2} v) e^{-v^2}\ dv. \quad (3.2) }[/math]

To cancel off an exponential decay factor in the [math]\displaystyle{ \xi }[/math] function, it is convenient to shift the v variable by [math]\displaystyle{ \pi |t|^{1/2}/8 }[/math], thus

[math]\displaystyle{ H_t(x) \sim \int_{\bf R} \xi(\frac{1+ix}{2} + i |t|^{1/2} v - \pi i |t|/8) e^{-(v - \pi |t|^{1/2}/8)^2}\ dv \quad (3.3) }[/math]
[math]\displaystyle{ \sim \int_{\bf R} \xi(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2 + \pi |t|^{1/2} v / 4}\ dv \quad (3.4) }[/math]

where

[math]\displaystyle{ \tilde x := x - \pi |t|/4 = x + \frac{\pi t}{4}. \quad (3.5) }[/math]

Now from the definition of [math]\displaystyle{ \xi }[/math] and the Stirling approximation we have

[math]\displaystyle{ \xi(s) \sim M_0(s) \zeta(s)\quad (3.6) }[/math]

where [math]\displaystyle{ M_0 }[/math] is defined in (6) of the writeup. Thus

[math]\displaystyle{ H_t(x) \sim \int_{\bf R} M_0(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) \zeta(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2 + \pi |t|^{1/2} v / 4}\ dv.\quad (3.7) }[/math]

By Taylor expansion we have

[math]\displaystyle{ M_0(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) \sim M_0(\frac{1+i\tilde x}{2}) \exp( \alpha( \frac{1+i\tilde x}{2} ) i |t|^{1/2} v + \alpha'(\frac{1+i \tilde x}{2}) \frac{-|t| v^2}{2} )\quad (3.8) }[/math]
[math]\displaystyle{ \sim \exp( \alpha( \frac{1+i\tilde x}{2} ) i |t|^{1/2} v + \alpha'(\frac{1+i \tilde x}{2}) \frac{-|t| v^2}{2} )\quad (3.9) }[/math]

where [math]\displaystyle{ \alpha }[/math] is defined in equation (8) of the writeup. We have the approximations

[math]\displaystyle{ \alpha(\frac{1+i\tilde x}{2} ) \approx \frac{1}{2} \log \frac{\tilde x}{4\pi} + \frac{i\pi}{4} \quad (3.10) }[/math]

and

[math]\displaystyle{ \alpha'(\frac{1+i\tilde x}{2} ) \approx \frac{-i}{\tilde x} \quad (3.11) }[/math]

and hence

[math]\displaystyle{ H_t(x) \sim \int_{\bf R} \exp( \frac{i |t|^{1/2} v}{2} \log \frac{\tilde x}{4\pi} - \pi |t|^{1/2} v/4 + i |t| v^2 / 2\tilde x) \zeta(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2 + \pi |t|^{1/2} v / 4}\ dv.\quad (3.12) }[/math]

The two factors of [math]\displaystyle{ \exp( \pi |t|^{1/2} v/4 ) }[/math] cancel. If we now write

[math]\displaystyle{ N := \sqrt{\frac{\tilde x}{4\pi}}\quad (3.13) }[/math]

and

[math]\displaystyle{ u := |t|/N^2 = 4\pi |t|/\tilde x,\quad (3.14) }[/math]

we conclude that

[math]\displaystyle{ H_t(x) \sim \int_{\bf R} \exp( i |t|^{1/2} v \log N + i u v^2 / 8 \pi) \zeta(\frac{1+i\tilde x}{2} + i |t|^{1/2} v) e^{-v^2}\ dv.\quad (3.15) }[/math]

If we formally write [math]\displaystyle{ \zeta(s) = \sum_n \frac{1}{n^s} }[/math] (ignoring convergence issues) we obtain

[math]\displaystyle{ H_t(x) \sim \sum_n \int_{\bf R} \exp( i |t|^{1/2} v \log N + i u v^2 / 8 \pi) n^{-\frac{1+i\tilde x}{2} - i |t|^{1/2} v} e^{-v^2}\ dv\quad (3.16) }[/math]
[math]\displaystyle{ \sim \sum_n \int_{\bf R} \exp( - i |t|^{1/2} v \log \frac{n}{N} + i u v^2 / 8 \pi -\frac{1+i\tilde x}{2} \log \frac{n}{N} ) e^{-v^2}\ dv\quad (3.17) }[/math]

We can compute the [math]\displaystyle{ v }[/math] integral to obtain

[math]\displaystyle{ H_t(x) \sim \sum_n \exp( - \frac{|t| \log^2 \frac{n}{N}}{4 (1 - iu / 8 \pi)} -\frac{1+i\tilde x}{2} \log \frac{n}{N}).\quad (3.18) }[/math]
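To spell out this step: the [math]\displaystyle{ v }[/math] integral in (3.17) is a Gaussian integral,

[math]\displaystyle{ \int_{\bf R} e^{-a v^2 - b v}\ dv = \sqrt{\frac{\pi}{a}} \exp( \frac{b^2}{4a} ), \qquad a := 1 - \frac{iu}{8\pi}, \quad b := i |t|^{1/2} \log \frac{n}{N}, }[/math]

which converges since the real part of [math]\displaystyle{ a }[/math] equals 1. The factor [math]\displaystyle{ \sqrt{\pi/a} }[/math] is independent of [math]\displaystyle{ n }[/math] and may be discarded, while [math]\displaystyle{ b^2/4a = -|t| \log^2 \frac{n}{N} / (4(1 - iu/8\pi)) }[/math] gives the first term in the exponent of (3.18).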

Using the Taylor approximation

[math]\displaystyle{ \log \frac{n}{N} \approx \frac{n-N}{N} - \frac{(n-N)^2}{2N^2} \quad (3.19) }[/math]

and dropping some small terms, we obtain

[math]\displaystyle{ H_t(x) \sim \sum_n \exp( - \frac{|t| (n-N)^2}{4 N^2 (1 - iu/8\pi)} -\frac{i\tilde x}{2} \frac{n-N}{N} + \frac{i \tilde x (n-N)^2}{4N^2} ).\quad (3.20) }[/math]

Writing [math]\displaystyle{ \tilde x = 4\pi N^2 }[/math] and [math]\displaystyle{ |t| = u N^2 }[/math], this becomes

[math]\displaystyle{ H_t(x) \sim \sum_n \exp( - \frac{2\pi u (n-N)^2}{8\pi - iu} -2 \pi i N(n-N) + \pi i (n-N)^2 ).\quad (3.21) }[/math]

Writing

[math]\displaystyle{ N(n-N) = \frac{1}{2} n^2 - \frac{1}{2} (n-N)^2 - \frac{1}{2} N^2 \quad (3.22) }[/math]

we thus have

[math]\displaystyle{ H_t(x) \sim \sum_n \exp( - \frac{2 \pi u (n-N)^2}{8 \pi - iu} - \pi i n^2 + 2 \pi i (n-N)^2 )\quad (3.23) }[/math]
[math]\displaystyle{ \sim \sum_n \exp( \frac{16 \pi^2 i (n-N)^2}{8 \pi - iu} ) e^{\pi i n}\quad (3.24) }[/math]
[math]\displaystyle{ \sim \theta_{01}( \frac{16 \pi N}{8\pi - iu}, \frac{16 \pi}{8\pi - iu} )\quad (3.25) }[/math]

where [math]\displaystyle{ \theta_{01} }[/math] is the theta function defined in the Wikipedia article on theta functions (the precise conventions are recalled after (3.28) below). Using the Jacobi identity we then have

[math]\displaystyle{ H_t(x) \sim \theta_{10}(N, \frac{iu - 8\pi}{16 \pi} )\quad (3.26) }[/math]
[math]\displaystyle{ \sim \theta( N + \frac{1}{2} \frac{iu - 8\pi}{16 \pi}, \frac{iu - 8\pi}{16 \pi})\quad (3.27) }[/math]
[math]\displaystyle{ \sim \sum_n \exp( - \pi i n(n+1) / 2 ) e^{2\pi i (n+1/2) N} e^{-u n(n+1)/16}.\quad (3.28) }[/math]
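For the record, the conventions for the theta functions in (3.25)-(3.28), following the Wikipedia article referred to above, are

[math]\displaystyle{ \theta(z;\tau) := \sum_{n \in {\bf Z}} \exp( \pi i n^2 \tau + 2 \pi i n z ), \qquad \theta_{01}(z;\tau) := \theta(z + \tfrac{1}{2}; \tau), \qquad \theta_{10}(z;\tau) := \exp( \tfrac{\pi i \tau}{4} + \pi i z )\, \theta(z + \tfrac{\tau}{2}; \tau), }[/math]

and the Jacobi identity used to pass from (3.25) to (3.26) is [math]\displaystyle{ \theta_{01}(z/\tau; -1/\tau) = (-i\tau)^{1/2} e^{\pi i z^2/\tau}\, \theta_{10}(z;\tau) }[/math], whose prefactors are non-zero and are absorbed into [math]\displaystyle{ \sim }[/math].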

As a sanity check, one can verify that the RHS is real-valued, just as [math]\displaystyle{ H_t(x) }[/math] is (by the functional equation): the terms for [math]\displaystyle{ n }[/math] and [math]\displaystyle{ -(n+1) }[/math] in (3.28) are complex conjugates of each other.
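To illustrate this check numerically, here is a small sketch that sums a truncation of (3.28) and confirms that the imaginary part is at the level of rounding error; the values of [math]\displaystyle{ N }[/math] and [math]\displaystyle{ u }[/math] below are arbitrary sample choices.

<pre>
# Check numerically that a truncation of the sum in (3.28) is real up to rounding error.
import cmath

def rhs_3_28(N, u, nmax=40):
    # sum_n exp(-pi i n(n+1)/2) * exp(2 pi i (n+1/2) N) * exp(-u n(n+1)/16)
    total = 0j
    for n in range(-nmax, nmax + 1):
        total += cmath.exp(-1j*cmath.pi*n*(n+1)/2 + 2j*cmath.pi*(n+0.5)*N - u*n*(n+1)/16)
    return total

if __name__ == "__main__":
    for N, u in [(100.25, 1.0), (37.6, 2.5)]:   # arbitrary sample values
        z = rhs_3_28(N, u)
        print(N, u, z.real, z.imag)  # imaginary part should be ~1e-15 or smaller
</pre>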