Dynamics of zeros

Revision as of 15:26, 26 March 2018

This is a sub-page of the page on the De Bruijn-Newman constant, and assumes all the notation from that page.

The entire functions [math]\displaystyle{ H_t(z) }[/math] obey the backwards heat equation

[math]\displaystyle{ \displaystyle \partial_t H_t(z) = - \partial_{zz} H_t(z). }[/math]
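As a quick sanity check of the sign convention in the backwards heat equation, one can test a toy solution numerically: [math]\displaystyle{ e^t \cos(z) }[/math] satisfies [math]\displaystyle{ \partial_t H = -\partial_{zz} H }[/math] since [math]\displaystyle{ \partial_{zz} \cos(z) = -\cos(z) }[/math]. (This toy function is our own illustration, not the actual [math]\displaystyle{ H_t }[/math].) A minimal finite-difference check:

```python
import cmath

def H(t, z):
    # Toy example (not the actual H_t): e^t * cos(z) solves the
    # backwards heat equation d_t H = -d_zz H, since d_zz cos(z) = -cos(z).
    return cmath.exp(t) * cmath.cos(z)

h = 1e-4
t, z = 0.1, 0.3 + 0.2j

# central finite differences for d_t H and d_zz H
dt_H = (H(t + h, z) - H(t - h, z)) / (2 * h)
dzz_H = (H(t, z + h) - 2 * H(t, z) + H(t, z - h)) / h ** 2

# backwards heat equation: d_t H + d_zz H should vanish up to discretisation error
assert abs(dt_H + dzz_H) < 1e-5
```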

== Dynamics of a simple zero ==

If [math]\displaystyle{ H_t }[/math] has a simple zero at [math]\displaystyle{ z_j(t) }[/math], then by the implicit function theorem [math]\displaystyle{ z_j(t) }[/math] varies in a continuously differentiable manner (in fact analytic) for nearby times [math]\displaystyle{ t }[/math]. By implicitly differentiating the equation [math]\displaystyle{ H_t(z_j(t)) = 0 }[/math], we see that

[math]\displaystyle{ \displaystyle \partial_t z_j(t) = - \frac{\partial_t H_t(z_j(t))}{\partial_z H_t(z_j(t))} = \frac{\partial_{zz} H_t(z_j(t))}{\partial_z H_t(z_j(t))}. }[/math]

(See also [CSV1994, Lemma 2.1].) Since the zero is simple, we have the Taylor expansion

[math]\displaystyle{ \displaystyle H_t(z) = a (z-z_j(t)) + b (z-z_j(t))^2 + O( |z-z_j(t)|^3 ) }[/math]

for some complex numbers [math]\displaystyle{ a,b }[/math] with [math]\displaystyle{ a \neq 0 }[/math], and for [math]\displaystyle{ z }[/math] close to [math]\displaystyle{ z_j(t) }[/math]. In particular

[math]\displaystyle{ \displaystyle \partial_z H_t(z) = a + 2 b (z-z_j(t)) + O( |z-z_j(t)|^2 ) }[/math]
[math]\displaystyle{ \displaystyle \partial_{zz} H_t(z) = 2 b + O( |z-z_j(t)| ) }[/math]

which implies that

[math]\displaystyle{ \frac{\partial_{zz} H_t(z_j(t))}{\partial_z H_t(z_j(t))} = \frac{2b}{a} }[/math]

and also that

[math]\displaystyle{ \frac{\partial_{z} H_t(z)}{H_t(z)} = \frac{1}{z-z_j(t)} + \frac{b}{a} + O( |z-z_j(t)| ) }[/math]

and thus

[math]\displaystyle{ \displaystyle \partial_t z_j(t) = 2 \lim_{z \to z_j(t)} \left( \frac{\partial_z H_t(z)}{H_t(z)} - \frac{1}{z-z_j(t)} \right). }[/math]

As [math]\displaystyle{ H_t }[/math] is even, of order 1, and has no zero at the origin, we see from the Hadamard factorisation theorem that

[math]\displaystyle{ H_t(z) = C_t \prod_{k=1}^\infty (1 - \frac{z}{z_k(t)}) (1 + \frac{z}{z_k(t)}) }[/math]

for some constant [math]\displaystyle{ C_t }[/math], and hence

[math]\displaystyle{ \frac{\partial_z H_t(z)}{H_t(z)} = \sum_k \frac{1}{z - z_k(t)} }[/math]

where the sum is in a principal value sense. Thus we have

[math]\displaystyle{ \displaystyle \partial_t z_j(t) = 2 \sum_{k \neq j} \frac{1}{z_j(t) - z_k(t)} }[/math]

where the sum is again in a principal value sense (cf. [CSV1994, Lemma 2.4]).
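To see the resulting dynamics in the simplest possible case, consider the toy even function [math]\displaystyle{ H_t(z) = z^2 + y_0^2 - 2t }[/math] (our own illustration, not the actual [math]\displaystyle{ H_t }[/math]), which solves the backwards heat equation exactly and has the single conjugate pair of zeroes [math]\displaystyle{ \pm i \sqrt{y_0^2 - 2t} }[/math]. For the upper zero [math]\displaystyle{ iy(t) }[/math] the ODE above reads [math]\displaystyle{ \partial_t y = -1/y }[/math], whose solution [math]\displaystyle{ y(t) = \sqrt{y_0^2-2t} }[/math] matches, with the pair colliding on the real axis at time [math]\displaystyle{ y_0^2/2 }[/math]. A sketch that integrates the ODE numerically and compares with the closed form:

```python
import math

# Toy model: H_t(z) = z^2 + y0^2 - 2t has zeros +-i*y(t) with y(t) = sqrt(y0^2 - 2t).
# The pair dynamics dz/dt = 2/(z_+ - z_-) at z_+ = i*y give dy/dt = -1/y.
y0 = 1.0
T = 0.4           # stop before the collision time y0^2/2 = 0.5
n = 400_000
dt = T / n

y = y0
for _ in range(n):
    y += dt * (-1.0 / y)   # forward Euler step of dy/dt = -1/y

exact = math.sqrt(y0 ** 2 - 2 * T)
assert abs(y - exact) < 1e-3
```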

== Dynamics of a repeated zero ==

Now suppose that at some time [math]\displaystyle{ t_0 }[/math] one has a repeated zero at [math]\displaystyle{ z_0 }[/math] of some order [math]\displaystyle{ k \geq 1 }[/math], thus

[math]\displaystyle{ \displaystyle H_{t_0}(z) = a_k (z-z_0)^k + O( |z-z_0|^{k+1} ) }[/math]

for some non-zero [math]\displaystyle{ a_k }[/math] and [math]\displaystyle{ z }[/math] close to [math]\displaystyle{ z_0 }[/math]. Using the backwards heat equation we then have

[math]\displaystyle{ \displaystyle \partial_t H_{t_0}(z) = - k(k-1) a_k (z-z_0)^{k-2} + O( |z-z_0|^{\max(k-1,0)} ) }[/math]

and more generally

[math]\displaystyle{ \displaystyle \partial_t^j H_{t_0}(z) = (-1)^j k(k-1) \dots (k-2j+1) a_k (z-z_0)^{k-2j} + O( |z-z_0|^{\max(k-2j+1,0)} ) }[/math]

for any fixed [math]\displaystyle{ j }[/math]. Performing Taylor expansion in time, we conclude that in the regime [math]\displaystyle{ z - z_0 = O( |t-t_0|^{1/2} ) }[/math] and [math]\displaystyle{ t }[/math] close to but not equal to [math]\displaystyle{ t_0 }[/math], one has

[math]\displaystyle{ \displaystyle H_t(z) = a_k ((t-t_0)^{1/2})^k ( P_k( \frac{z-z_0}{(t-t_0)^{1/2}} ) + O( |t-t_0|^{1/2} ) ) }[/math]

where we use some branch of the square root and [math]\displaystyle{ P_k }[/math] is the degree [math]\displaystyle{ k }[/math] polynomial

[math]\displaystyle{ P_k(z) := \exp(-\partial_{zz}) z^k = \sum_{0 \leq j \leq k/2} \frac{(-1)^j}{j!} k (k-1) \dots (k-2j+1) z^{k-2j}. }[/math]

We claim that the [math]\displaystyle{ k }[/math] zeroes [math]\displaystyle{ x_1,\dots,x_k }[/math] of [math]\displaystyle{ P_k }[/math] are real and simple. If so, then by Rouche's theorem we conclude that for [math]\displaystyle{ t }[/math] close to [math]\displaystyle{ t_0 }[/math], the [math]\displaystyle{ k }[/math] zeroes of [math]\displaystyle{ H_t }[/math] close to [math]\displaystyle{ z_0 }[/math] take the form

[math]\displaystyle{ z_0 + (t-t_0)^{1/2}( x_j + O( |t-t_0|^{1/2} ) ) }[/math]

for [math]\displaystyle{ j=1,\dots,k }[/math]. In particular, the zeroes approach [math]\displaystyle{ z_0 }[/math] from an asymptotically vertical direction as [math]\displaystyle{ t \to t_0^- }[/math] and repel in an asymptotically horizontal direction as [math]\displaystyle{ t \to t_0^+ }[/math].
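For instance, in the model case [math]\displaystyle{ k=2 }[/math] one can take (as a hypothetical local model, not the actual [math]\displaystyle{ H_t }[/math]) [math]\displaystyle{ H_t(z) = (z-z_0)^2 - 2(t-t_0) }[/math], which solves the backwards heat equation exactly; its zeroes [math]\displaystyle{ z_0 \pm \sqrt{2(t-t_0)} }[/math] are exactly [math]\displaystyle{ z_0 + (t-t_0)^{1/2} (\pm \sqrt{2}) }[/math], matching [math]\displaystyle{ P_2(z) = z^2-2 }[/math]. A quick check of the vertical-to-horizontal transition:

```python
import cmath

t0 = 0.3
z0 = 1.0 + 0.0j

def H(t, z):
    # Hypothetical local model of an order-2 zero at z0 at time t0; solves d_t H = -d_zz H
    return (z - z0) ** 2 - 2 * (t - t0)

for t in (t0 + 1e-4, t0 - 1e-4):
    root = z0 + cmath.sqrt(2 * (t - t0))
    assert abs(H(t, root)) < 1e-12
    offset = root - z0
    if t > t0:
        assert offset.imag == 0.0   # zeroes separate horizontally after the collision
    else:
        assert offset.real == 0.0   # zeroes approach vertically before the collision
```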

Now we verify the claim. Applying [math]\displaystyle{ \exp(-\partial_{zz}) }[/math] to the identity [math]\displaystyle{ z \partial_z z^k = k z^k }[/math] and using the commutation identity [math]\displaystyle{ \exp(-\partial_{zz}) z = (z - 2 \partial_z) \exp(-\partial_{zz}) }[/math], we see that [math]\displaystyle{ P_k }[/math] obeys the ODE

[math]\displaystyle{ -2 \partial_{zz} P_k + z \partial_z P_k = k P_k. }[/math]

This already shows that [math]\displaystyle{ P_k }[/math] cannot have a repeated zero (because [math]\displaystyle{ \partial_{zz} P_k }[/math] would then vanish to lower order than either [math]\displaystyle{ z \partial_z P_k }[/math] or [math]\displaystyle{ k P_k }[/math]). Factoring [math]\displaystyle{ P_k(z) = (z-z_1) \dots (z-z_k) }[/math], and evaluating the above at some zero [math]\displaystyle{ z_j }[/math], we conclude that

[math]\displaystyle{ -4 \sum_{l \neq j} \frac{1}{z_j-z_l} + z_j = 0. }[/math]

If [math]\displaystyle{ z_j }[/math] is a zero with maximal imaginary part, and this imaginary part is positive, then the left-hand side of this equation has positive imaginary part, which is absurd. Thus there are no zeroes with positive imaginary part, and similarly none with negative imaginary part, hence all zeroes are real. (One can also establish these claims using Sturm's theorem; see this comment.)
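One can also check the construction numerically: build [math]\displaystyle{ P_k }[/math] directly from the definition [math]\displaystyle{ \exp(-\partial_{zz}) z^k }[/math] (the exponential series terminates on polynomials), verify the ODE coefficient by coefficient, and confirm that e.g. [math]\displaystyle{ P_4 = z^4 - 12z^2 + 12 }[/math] has four real simple zeroes. A sketch (the helper names are ours):

```python
from math import factorial, sqrt

def d2(c):
    # second derivative of a polynomial given by coefficients c (c[i] multiplies z^i)
    return [(i + 2) * (i + 1) * c[i + 2] for i in range(max(len(c) - 2, 0))]

def P(k):
    # P_k(z) = exp(-d_zz) z^k; the series terminates after floor(k/2) terms
    out = [0.0] * k + [1.0]          # start with z^k
    deriv = out[:]
    for j in range(1, k // 2 + 1):
        deriv = d2(deriv)
        w = (-1) ** j / factorial(j)
        for i, b in enumerate(deriv):
            out[i] += w * b
    return out

c = P(4)                              # z^4 - 12 z^2 + 12
assert c == [12.0, 0.0, -12.0, 0.0, 1.0]

# check the ODE -2 P'' + z P' = k P coefficient by coefficient
k = 4
for i in range(len(c)):
    second = (i + 2) * (i + 1) * c[i + 2] if i + 2 < len(c) else 0.0
    assert -2 * second + i * c[i] == k * c[i]

# zeroes satisfy z^2 = 6 +- sqrt(24); both roots are positive, so all four zeroes
# of P_4 are real (and distinct, hence simple)
for s in (6 + sqrt(24), 6 - sqrt(24)):
    assert s > 0
```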

'''Remark''' The [math]\displaystyle{ z_j }[/math] minimize the Hamiltonian [math]\displaystyle{ \sum_{j \neq l} \log \frac{1}{|z_j-z_l|} + \sum_j \frac{1}{4} |z_j|^2 }[/math] and are also known as Fekete points in random matrix theory. For large values of [math]\displaystyle{ k }[/math], the distribution of these Fekete points approaches a semicircular distribution.

== Bounding [math]\displaystyle{ \Lambda }[/math] ==

'''Theorem 1''' Suppose that all zeroes [math]\displaystyle{ x+iy }[/math] of [math]\displaystyle{ H_t }[/math] have imaginary part [math]\displaystyle{ y }[/math] at most [math]\displaystyle{ \varepsilon }[/math]. Then [math]\displaystyle{ \Lambda \leq t + \frac{1}{2} \varepsilon^2 }[/math].

'''Proof''' See [B1950, Theorem 13]. Informally, the claim follows by using the imaginary part dynamics

[math]\displaystyle{ \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} }[/math]

to conclude that for the highest zero (with the maximum value of [math]\displaystyle{ y_j }[/math]), one has

[math]\displaystyle{ \partial_t y_j \leq - \frac{1}{y_j} }[/math]

if [math]\displaystyle{ y_j \gt 0 }[/math], due to the attraction that the zero [math]\displaystyle{ x_j+iy_j }[/math] feels towards its complex conjugate [math]\displaystyle{ x_j - iy_j }[/math]. (One has to exclude the repeated zeroes from this analysis somehow.) [math]\displaystyle{ \Box }[/math]

In a similar spirit, we have

'''Lemma 2''' Suppose [math]\displaystyle{ x_j+iy_j }[/math] is a simple zero of [math]\displaystyle{ H_t }[/math] with [math]\displaystyle{ y_j\gt 0 }[/math], and suppose that [math]\displaystyle{ H_t }[/math] has no zeroes in the region [math]\displaystyle{ \{ x+iy: y \gt \sqrt{(x-x_j)^2 + y_j^2} \} }[/math] above the hyperbola. Then [math]\displaystyle{ \partial_t y_j }[/math] is negative.

'''Proof''' The contribution of the complex conjugate [math]\displaystyle{ x_j-iy_j }[/math] is negative, as is the contribution of any real zero [math]\displaystyle{ x_k }[/math]. It remains to show that the contribution of any other pair [math]\displaystyle{ x_k+iy_k, x_k - iy_k }[/math] is non-positive, that is to say that

[math]\displaystyle{ \frac{y_k-y_j}{(x_k-x_j)^2 + (y_k-y_j)^2} + \frac{-y_k-y_j}{(x_k-x_j)^2 + (-y_k-y_j)^2} \leq 0. }[/math]

Cross-multiplying and canceling, the above inequality eventually simplifies to

[math]\displaystyle{ y_k^2 \leq (x_k-x_j)^2 + y_j^2 }[/math]

which is automatic due to the hypothesis. [math]\displaystyle{ \Box }[/math]
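The equivalence between the displayed pair sum being non-positive and [math]\displaystyle{ y_k^2 \leq (x_k-x_j)^2 + y_j^2 }[/math] can be sanity-checked by random sampling (the algebra reduces the combined numerator to [math]\displaystyle{ 2 y_j ( y_k^2 - y_j^2 - (x_k-x_j)^2 ) }[/math]):

```python
import random

random.seed(0)
for _ in range(10_000):
    xj = random.uniform(-5.0, 5.0)
    yj = random.uniform(0.01, 3.0)
    xk = random.uniform(-5.0, 5.0)
    # sample y_k on or below the hyperbola y^2 = (x - x_j)^2 + y_j^2
    cap = ((xk - xj) ** 2 + yj ** 2) ** 0.5
    yk = random.uniform(0.0, cap)
    # the pair contribution from x_k + i y_k and its conjugate x_k - i y_k
    pair = ((yk - yj) / ((xk - xj) ** 2 + (yk - yj) ** 2)
            + (-yk - yj) / ((xk - xj) ** 2 + (yk + yj) ** 2))
    assert pair <= 1e-12
```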

'''Corollary 3''' Suppose that one has parameters [math]\displaystyle{ t_0, T, \varepsilon \gt 0 }[/math] obeying the following properties:

* All the zeroes [math]\displaystyle{ x+iy }[/math] of [math]\displaystyle{ H_0 }[/math] with [math]\displaystyle{ 0 \leq x \leq T }[/math] are real.
* There are no zeroes [math]\displaystyle{ x+iy }[/math] of [math]\displaystyle{ H_t }[/math] with [math]\displaystyle{ 0 \leq t \leq t_0 }[/math] in the region [math]\displaystyle{ \{ x+iy: x \geq T; 1-2t \geq y^2 \geq \varepsilon^2 + (T-x)^2 \} }[/math].
* There are no zeroes [math]\displaystyle{ x+iy }[/math] of [math]\displaystyle{ H_{t_0} }[/math] with [math]\displaystyle{ x \gt T }[/math] and [math]\displaystyle{ y \geq \varepsilon }[/math].

Then one has [math]\displaystyle{ \Lambda \leq t_0 + \frac{1}{2} \varepsilon^2 }[/math].

'''Proof''' (Sketch) By the previous lemma, all the zeroes [math]\displaystyle{ x_j+iy_j }[/math] of [math]\displaystyle{ H_{t} }[/math] for [math]\displaystyle{ 0 \lt t \lt t_0 }[/math] with [math]\displaystyle{ 0 \leq x_j \leq T }[/math] and [math]\displaystyle{ y_j \gt \varepsilon }[/math] obey [math]\displaystyle{ \partial_t y_j \lt 0 }[/math]. By a continuity argument this shows in fact that there are no such zeroes. In particular all zeroes of [math]\displaystyle{ H_{t_0} }[/math] have imaginary part at most [math]\displaystyle{ \varepsilon }[/math], and the claim then follows from Theorem 1. [math]\displaystyle{ \Box }[/math]


== Derivative analysis ==

Let [math]\displaystyle{ X }[/math] be a large quantity, and consider the expression

[math]\displaystyle{ S(t) := \sum_{k: x_k, y_k \gt 0} y_k e^{-x_k/X} }[/math]

where [math]\displaystyle{ x_k+iy_k }[/math] ranges over the zeroes of [math]\displaystyle{ H_t }[/math]. The exponential decay factor should allow us to justify termwise differentiation etc., so we obtain

[math]\displaystyle{ \partial_t S(t) = \sum_{k: x_k, y_k \gt 0} \left( \partial_t y_k e^{-x_k/X} - \frac{1}{X} y_k e^{-x_k/X} \partial_t x_k \right). }[/math]

We formally have

[math]\displaystyle{ \partial_t y_k = \sum_{j \neq k} \frac{2 (y_j - y_k)}{(x_k-x_j)^2 + (y_k-y_j)^2} }[/math]

and

[math]\displaystyle{ \partial_t x_k = -\sum_{j \neq k} \frac{2 (x_j - x_k)}{(x_k-x_j)^2 + (y_k-y_j)^2} }[/math]

so the right-hand side is equal to

[math]\displaystyle{ \partial_t S(t) = \sum_{k: x_k, y_k \gt 0} \sum_{j \neq k} \frac{2}{(x_k-x_j)^2 + (y_k-y_j)^2} \left( (y_j - y_k) e^{-x_k/X} + \frac{1}{X} y_k e^{-x_k/X} (x_j - x_k) \right). }[/math]
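The rearrangement of the double sum can be sanity-checked on a random finite configuration of zeroes (pretending the sums are finite; the test data below are hypothetical, not actual zeroes of [math]\displaystyle{ H_t }[/math]):

```python
import math
import random

random.seed(1)
# a random finite "zero set" (x_j, y_j); stand-ins for the zeroes of H_t
zeros = [(random.uniform(0.5, 5.0), random.uniform(-1.0, 1.0)) for _ in range(8)]
X = 50.0

lhs = 0.0   # sum of d_t y_k e^{-x_k/X} - (1/X) y_k e^{-x_k/X} d_t x_k
rhs = 0.0   # the rearranged double sum
for k, (xk, yk) in enumerate(zeros):
    if xk > 0 and yk > 0:
        w = math.exp(-xk / X)
        dyk = sum(2 * (yj - yk) / ((xk - xj) ** 2 + (yk - yj) ** 2)
                  for j, (xj, yj) in enumerate(zeros) if j != k)
        dxk = -sum(2 * (xj - xk) / ((xk - xj) ** 2 + (yk - yj) ** 2)
                   for j, (xj, yj) in enumerate(zeros) if j != k)
        lhs += dyk * w - yk * w * dxk / X
        rhs += sum(2 / ((xk - xj) ** 2 + (yk - yj) ** 2)
                   * ((yj - yk) * w + yk * w * (xj - xk) / X)
                   for j, (xj, yj) in enumerate(zeros) if j != k)

assert abs(lhs - rhs) < 1e-9
```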

Consider first the contribution of those terms with [math]\displaystyle{ x_j \leq 0 }[/math]. The [math]\displaystyle{ (x_j-x_k) }[/math] term is negative. The [math]\displaystyle{ y_j-y_k }[/math] term is also negative if [math]\displaystyle{ y_j = 0 }[/math], so suppose [math]\displaystyle{ y_j \neq 0 }[/math]. Pairing each such zero with its complex conjugate, we get a contribution of

[math]\displaystyle{ e^{-x_k/X} \left( \frac{y_j - y_k}{(x_k-x_j)^2 + (y_k-y_j)^2} + \frac{-y_j - y_k}{(x_k-x_j)^2 + (y_k+y_j)^2} \right) }[/math]

and this should also be negative by the computation in Lemma 2 (provided we can ensure that, say, [math]\displaystyle{ x_j \leq -1 }[/math] and [math]\displaystyle{ x_k \geq 1 }[/math], but this should be easy to do).

Next we consider the contribution of those terms with [math]\displaystyle{ x_j \gt 0, y_j \neq 0 }[/math] and [math]\displaystyle{ |x_k-x_j| \geq 1 }[/math]. The [math]\displaystyle{ (x_j-x_k) }[/math] terms can be summed using the Riemann von Mangoldt formula, and contribute at most [math]\displaystyle{ O( \frac{\log^2 X}{X} S(t)) }[/math]. The [math]\displaystyle{ y_j-y_k }[/math] terms, when paired with their complex conjugates, are negative thanks to Lemma 2.

Next consider the contribution of those terms with [math]\displaystyle{ x_j \gt 0 }[/math] and [math]\displaystyle{ y_j \leq 0 }[/math] that have not previously been treated. The [math]\displaystyle{ (y_j-y_k) }[/math] term is now negative (and equal to [math]\displaystyle{ \frac{-1}{y_k} e^{-x_k/X} }[/math] when [math]\displaystyle{ x_j+iy_j }[/math] is the complex conjugate [math]\displaystyle{ x_k-iy_k }[/math]), and the [math]\displaystyle{ (x_j-x_k) }[/math] term is negative unless [math]\displaystyle{ x_j \geq x_k }[/math]. The terms here sum to at most

[math]\displaystyle{ \frac{1}{X} \sum_{k: x_k, y_k \gt 0} y_k e^{-x_k/X} ( \sum_{j: x_j \geq x_k; y_j \leq 0} \frac{2(x_j-x_k)}{(x_k-x_j)^2 + y_k^2} - \frac{1}{y_k^2}) }[/math]

which by the Riemann von Mangoldt formula is

[math]\displaystyle{ \frac{1}{X} \sum_{k: x_k, y_k \gt 0} y_k e^{-x_k/X} ( O( \log^2 x_k) + \frac{O( \log x_k) }{y_k} - \frac{1}{y_k^2}) }[/math]

which by Young's inequality (and another appeal to Riemann von Mangoldt to handle the tail when say [math]\displaystyle{ x_k \geq X^{20} }[/math]) sums to

[math]\displaystyle{ \frac{1}{X} \sum_{k: x_k, y_k \gt 0} y_k e^{-x_k/X} O( \log^2 x_k) = O( \frac{\log^2 X}{X} S(t) + X^{-10} ). }[/math]

Finally, we consider the contribution of those terms with [math]\displaystyle{ x_j,y_j \gt 0 }[/math] and [math]\displaystyle{ |x_k-x_j| \leq 1 }[/math]. Here we can symmetrise to get

[math]\displaystyle{ \sum_{j,k: x_k,y_k,x_j,y_j \gt 0: |x_k-x_j| \leq 1} \frac{1}{(x_k-x_j)^2 + (y_k-y_j)^2} ( (y_j - y_k) (e^{-x_k/X} - e^{-x_j/X}) + \frac{1}{X} (y_k e^{-x_k/X} - y_j e^{-x_j/X}) (x_j - x_k) ). }[/math]

By Taylor expansion we have

[math]\displaystyle{ e^{-x_k/X} - e^{-x_j/X} = e^{-x_k/X} ((x_j-x_k)/X + O( |x_j-x_k|/X^2) ) }[/math]

and also

[math]\displaystyle{ y_k e^{-x_k/X} - y_j e^{-x_j/X} = (y_k - y_j) e^{-x_k/X} + O(|x_j-x_k|/X) }[/math]

so we can cancel down to

[math]\displaystyle{ \sum_{j,k: x_k,y_k,x_j,y_j \gt 0: |x_k-x_j| \leq 1} \frac{1}{(x_k-x_j)^2 + (y_k-y_j)^2} (y_j - y_k) O( |x_j-x_k|/X^2 ) e^{-x_k/X} }[/math]

which by the arithmetic mean geometric mean inequality followed by Riemann von Mangoldt is bounded by

[math]\displaystyle{ O( \frac{1}{X^2} \sum_{j,k: x_k,y_k,x_j,y_j \gt 0: |x_k-x_j| \leq 1} e^{-x_k/X} ) }[/math]
[math]\displaystyle{ \leq O( \frac{1}{X^2} \sum_{x_k \gt 0} e^{-x_k/X} \log X ) }[/math]
[math]\displaystyle{ \leq O( \frac{\log^2 X}{X} ). }[/math]

Putting all this together we obtain

[math]\displaystyle{ \partial_t S(t) \leq O( \frac{\log^2 X}{X} S(t) + \frac{\log^2 X}{X} ) }[/math]

and in particular by Gronwall's inequality

[math]\displaystyle{ S(t) \leq \exp( O( \frac{\log^2 X}{X} ) ) S(0) + O( \frac{\log^2 X}{X} ) }[/math]

for [math]\displaystyle{ 0 \leq t \leq 1/2 }[/math] (say), which should be good enough to create non-trivial zero-free regions for [math]\displaystyle{ H_t }[/math].
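For completeness, the Gronwall step can be spelled out, writing [math]\displaystyle{ A = O( \frac{\log^2 X}{X} ) }[/math] for the common constant in the differential inequality (a standard computation, sketched here, not part of the original argument):

```latex
% From S'(t) <= A S(t) + A with A = O(\log^2 X / X):
\frac{d}{dt}\bigl( e^{-At} S(t) \bigr)
  = e^{-At} \bigl( S'(t) - A S(t) \bigr) \leq A e^{-At},
\qquad\text{so integrating from } 0 \text{ to } t:
\qquad
S(t) \leq e^{At} S(0) + e^{At} \int_0^t A e^{-As}\, ds
      \leq e^{A/2} S(0) + O(A)
\qquad (0 \leq t \leq 1/2),
```

which recovers the stated bound [math]\displaystyle{ S(t) \leq \exp( O( \frac{\log^2 X}{X} ) ) S(0) + O( \frac{\log^2 X}{X} ) }[/math].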