Prime counting function

One way to find primes is to find a polynomial time algorithm to compute [math]\displaystyle{ \pi(x) }[/math], the number of primes less than x, to reasonable accuracy (e.g. if we can do better than [math]\displaystyle{ 10^{k/2} }[/math] accuracy for k-digit x, we can break the square root barrier). We don't necessarily have to do this for all x; it would suffice to exhibit a single targeted [math]\displaystyle{ x \sim 10^k }[/math] for which we can show [math]\displaystyle{ \pi(x+y)-\pi(x) \gt 0 }[/math] for some [math]\displaystyle{ y=o(10^{k/2}) }[/math].

Now, perhaps instead of trying to prove that intervals like [math]\displaystyle{ [x, x + (\log x)^A] }[/math] contain primes unconditionally, we should first try to be much less ambitious and aim to show that *some* interval [math]\displaystyle{ [y, y + \sqrt{x}] }[/math] with [math]\displaystyle{ y \in [x,2x] }[/math] contains a prime number that we can discover computationally.

How? Well, let’s start by assuming that we can computationally locate all the [math]\displaystyle{ O( \sqrt{x} \log x) }[/math] zeros in the critical strip up to height [math]\displaystyle{ \sqrt{x} }[/math]. Then we can run a kind of “binary search” to locate an interval [math]\displaystyle{ [y, y + \sqrt{x}] }[/math] containing loads of primes: say at the i-th step in the iteration we know that some interval [math]\displaystyle{ [u,v] }[/math] has loads of primes. Then, using the explicit formula [math]\displaystyle{ \psi(z) = \sum_{n \leq z} \Lambda_0(n) = z - \sum_{\rho : \zeta(\rho)=0} z^\rho/\rho }[/math] (actually, the usual quantitative form of the identity), we can decide which of the two intervals [math]\displaystyle{ [u, (u+v)/2] }[/math] or [math]\displaystyle{ [(u+v)/2, v] }[/math] contains loads of primes (maybe both do; if so, pick either one for the next step in the iteration). We keep iterating like this until we have an interval of width around [math]\displaystyle{ x^{1/2 + \epsilon} }[/math] or so that we know contains primes.
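
To make the bisection step concrete, here is a toy Python sketch using mpmath. It is not the hypothesized fast algorithm: zetazero() simply looks up zeros one at a time, standing in for the assumed zero-location step, the explicit formula is truncated at the first 50 zeros (so the truncation error limits how narrow the final interval can honestly be made), and the names psi_approx and locate_prime_interval are ours.

<pre>
# Toy bisection on a truncated explicit formula.  mpmath's zetazero()
# stands in for the hypothetical fast zero-location step; this is an
# illustration, not the conjectured x^{1/2+o(1)} algorithm.
from mpmath import mp, zetazero, log, pi

mp.dps = 30
ZEROS = [zetazero(k) for k in range(1, 51)]   # first 50 zeros rho = 1/2 + i*t

def psi_approx(z):
    """Truncation of psi(z) = z - sum_rho z^rho/rho - log(2 pi) - (1/2) log(1 - z^-2)."""
    z = mp.mpf(z)
    s = z - log(2 * pi) - log(1 - z ** (-2)) / 2
    for rho in ZEROS:
        s -= 2 * (z ** rho / rho).real   # zeros come in conjugate pairs
    return s

def locate_prime_interval(u, v, width):
    """Bisect [u, v], keeping the half with the larger psi-increment,
    until the interval has the requested width."""
    while v - u > width:
        m = (u + v) / 2
        if psi_approx(m) - psi_approx(u) >= psi_approx(v) - psi_approx(m):
            v = m
        else:
            u = m
    return u, v

print(locate_prime_interval(1000, 2000, 100))
</pre>

Each bisection step needs only a couple of fresh evaluations of the truncated zero sum, so the overall cost is governed by how cheaply the zeros can be located and that sum evaluated, which is exactly the question taken up next.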

Ok, but how do you locate the zeros of the zeta function up to height [math]\displaystyle{ \sqrt{x} }[/math]? Well, I’m not sure, but maybe one can try something like the following: if we can understand the value of [math]\displaystyle{ \zeta(s) }[/math] at enough well-spaced, but close together, points up the half-line, then by taking local interpolations through these points we can locate the zeros to good precision. And then to evaluate [math]\displaystyle{ \zeta(s) }[/math] at these well-spaced points, maybe we can use a Dirichlet polynomial approximation, and then perhaps apply some variant of the Fast Fourier Transform (if this is even possible with Dirichlet polynomials, which are not really polynomials) to evaluate it at lots of values [math]\displaystyle{ s = 1/2 + i\delta }[/math] quickly; perhaps FFTs can speed things up enough so that the whole process doesn’t take more than, say, [math]\displaystyle{ 10^{k/2} \hbox{polylog}(k) }[/math] bit operations. Keep in mind also that our Dirichlet polynomial approximation only needs to hold “on average” once we are sufficiently high up the half-line, so it seems quite plausible that this could work (and for s near 1/2 we would need to be more careful, and get the sharpest approximation we can, because those terms contribute more in the explicit formula).
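
For reference, here is the naive way to evaluate such a Dirichlet polynomial on a grid, in Python with numpy; it costs O(NT) operations for N terms and T grid points, which is the benchmark any FFT-style scheme (in the spirit of Odlyzko-Schonhage) would have to beat. The correction added at the end is the standard Euler-Maclaurin tail term [math]\displaystyle{ N^{1-s}/(s-1) }[/math], which makes the truncated series a usable stand-in for [math]\displaystyle{ \zeta(1/2+it) }[/math] when t is well below N; all names here are ours.

<pre>
# Naive O(N*T) evaluation of the Dirichlet polynomial
#   D(t) = sum_{n <= N} n^{-1/2 - i t}
# on a grid of t-values.
import numpy as np

def dirichlet_poly(N, ts):
    """Values of sum_{n<=N} n^{-1/2-it} at each t in the array ts."""
    n = np.arange(1, N + 1)
    phases = np.exp(-1j * np.outer(ts, np.log(n)))  # e^{-i t log n}, shape (T, N)
    return phases @ n ** -0.5

# Adding the Euler-Maclaurin tail N^{1-s}/(s-1) turns the polynomial into
# an approximation of zeta(1/2 + it) for t << N; dips of its absolute
# value flag candidate zeros for local interpolation to refine.
N, ts = 1000, np.linspace(10, 40, 3001)
s = 0.5 + 1j * ts
zeta_vals = dirichlet_poly(N, ts) + N ** (1 - s) / (s - 1)
print(ts[np.argmin(np.abs(zeta_vals))])  # lands near a zero in this range (the first is t = 14.134...)
</pre>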


Computing the parity of [math]\displaystyle{ \pi(x) }[/math]

Interestingly, there is an elementary way to compute the parity of [math]\displaystyle{ \pi(x) }[/math] in [math]\displaystyle{ x^{1/2+o(1)} }[/math] time. The observation is that for square-free n > 1, the divisor function [math]\displaystyle{ \tau(n) }[/math] (the number of divisors of n) is equal to 2 mod 4 if n is prime, and is divisible by 4 otherwise, while the term n = 1 contributes [math]\displaystyle{ \tau(1) \mu(1)^2 = 1 }[/math]. This gives the identity

[math]\displaystyle{ 2 \pi(x) = \sum_{n\lt x} \tau(n) \mu(n)^2 - 1 \hbox{ mod } 4. }[/math]

Thus, to compute the parity of [math]\displaystyle{ \pi(x) }[/math], it suffices to compute [math]\displaystyle{ \sum_{n\lt x} \tau(n) \mu(n)^2 }[/math] mod 4.
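
For small x the congruence is easy to test by brute force. Here is a quick Python sanity check (nothing like the [math]\displaystyle{ x^{1/2+o(1)} }[/math] algorithm); the helper names tau and mu_sq are ours, with sympy doing the factoring.

<pre>
# Brute-force check of  2 pi(x) = sum_{n<x} tau(n) mu(n)^2 - 1  (mod 4).
from sympy import factorint, primepi

def tau(n):
    """Number of divisors of n."""
    out = 1
    for e in factorint(n).values():
        out *= e + 1
    return out

def mu_sq(n):
    """mu(n)^2, i.e. the indicator that n is squarefree."""
    return int(all(e == 1 for e in factorint(n).values()))

for x in [10, 100, 1000]:
    s = sum(tau(n) * mu_sq(n) for n in range(1, x))
    parity = ((s - 1) % 4) // 2            # this is pi(x) mod 2
    # primepi(x - 1) counts primes < x, matching the strict sum n < x
    assert parity == primepi(x - 1) % 2
</pre>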

But by Möbius inversion, in the form [math]\displaystyle{ \mu(n)^2 = \sum_{d^2|n} \mu(d) }[/math], one can express [math]\displaystyle{ \tau(n) \mu(n)^2 = \sum_{d^2|n} \mu(d) \tau(n) }[/math] and so

[math]\displaystyle{ \sum_{n\lt x} \tau(n) \mu(n)^2 = \sum_{d \lt x^{1/2}} \mu(d) \sum_{n\lt x: d^2 | n} \tau(n). }[/math]
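
This rearrangement is easy to sanity-check numerically; here is a brute-force comparison of both sides for a tiny cutoff (helper names ours, sympy doing the factoring):

<pre>
# Brute-force check of the d-sum rearrangement above (tiny x only).
from sympy import factorint

def tau(n):
    out = 1
    for e in factorint(n).values():
        out *= e + 1
    return out

def mu(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

x = 200
lhs = sum(tau(n) * mu(n) ** 2 for n in range(1, x))
# terms with d^2 >= x have empty inner sums, so the exact cutoff is harmless
rhs = sum(mu(d) * sum(tau(n) for n in range(1, x) if n % (d * d) == 0)
          for d in range(1, int(x ** 0.5) + 1))
assert lhs == rhs
</pre>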

Since one can compute all the [math]\displaystyle{ \mu(d) }[/math] for [math]\displaystyle{ d \lt x^{1/2} }[/math] in [math]\displaystyle{ x^{1/2+o(1)} }[/math] time (by sieving), it would suffice to compute [math]\displaystyle{ \sum_{n\lt x: d^2 | n} \tau(n) }[/math] in [math]\displaystyle{ (x/d^2)^{1/2} x^{o(1)} }[/math] time for each d. One can use the multiplicativity properties of [math]\displaystyle{ \tau }[/math] to decompose this sum as a combination of [math]\displaystyle{ x^{o(1)} }[/math] sums of the form [math]\displaystyle{ \sum_{n\lt y} \tau(n) }[/math] for various [math]\displaystyle{ y \leq x/d^2 }[/math], so it suffices to show that [math]\displaystyle{ \sum_{n\lt y} \tau(n)= \sum_{a,b: ab \lt y} 1 }[/math] can be computed in [math]\displaystyle{ y^{1/2+o(1)} }[/math] time. But this can be done by the Gauss hyperbola method; indeed

[math]\displaystyle{ \sum_{a,b: ab \lt y} 1 = 2 \sum_{a \lt \sqrt{y}} \lfloor \frac{y}{a} \rfloor - \lfloor \sqrt{y} \rfloor^2. }[/math]
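
In Python this primitive looks as follows (written for the sum over [math]\displaystyle{ n \leq y }[/math]; the strict-inequality version differs only at n = y). Everything stays in exact integer arithmetic via math.isqrt.

<pre>
# Gauss hyperbola method: sum_{n<=y} tau(n) in O(sqrt(y)) time.
from math import isqrt

def sum_tau(y):
    """sum_{n<=y} tau(n) = 2*sum_{a<=sqrt(y)} floor(y/a) - floor(sqrt(y))^2."""
    r = isqrt(y)
    return 2 * sum(y // a for a in range(1, r + 1)) - r * r

# Sanity check against the O(y) formula sum_{a<=y} floor(y/a).
y = 10 ** 4
assert sum_tau(y) == sum(y // a for a in range(1, y + 1))
</pre>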

The same method lets us compute [math]\displaystyle{ \pi(x) }[/math] mod 3 efficiently provided one can compute [math]\displaystyle{ \sum_{n\lt x} \tau_3(n) = \sum_{a,b,c: abc \lt x} 1 }[/math] efficiently. Unfortunately, the best algorithm known so far for this takes time [math]\displaystyle{ x^{2/3+o(1)} }[/math]. If one could compute [math]\displaystyle{ \pi(x) }[/math] mod q for every prime modulus q up to [math]\displaystyle{ O(\log x) }[/math] (so that the product of the moduli exceeds [math]\displaystyle{ \pi(x) }[/math]), one could reconstruct [math]\displaystyle{ \pi(x) }[/math] itself by the Chinese remainder theorem.
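
To illustrate that endgame: if, hypothetically, the residues [math]\displaystyle{ \pi(x) \hbox{ mod } q }[/math] were known for small primes q whose product exceeds [math]\displaystyle{ \pi(x) }[/math], the Chinese remainder theorem pins down [math]\displaystyle{ \pi(x) }[/math] exactly. A toy instance with [math]\displaystyle{ \pi(100) = 25 }[/math]:

<pre>
# Hypothetical endgame via the Chinese remainder theorem: recover pi(x)
# from its residues mod small primes, here with x = 100 and pi(100) = 25.
from sympy.ntheory.modular import crt

moduli = [2, 3, 5, 7]                  # product 210 > pi(100)
residues = [25 % q for q in moduli]    # the residues pi(x) mod q
value, modulus = crt(moduli, residues)
assert value == 25 and modulus == 210
</pre>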

Related to this approach, there is a nice identity of Linnik. Let [math]\displaystyle{ \Lambda(n) }[/math] be the von Mangoldt function and [math]\displaystyle{ t_{j}(n) }[/math] the number of representations of n as an ordered product of j integers greater than 1; then [math]\displaystyle{ \Lambda(n) = \ln(n) \sum_{j=1}^{\infty} \frac{(-1)^{j-1}}{j} t_{j}(n) }[/math]. The sum is rather short, since [math]\displaystyle{ t_{j}(n)=0 }[/math] for j larger than about [math]\displaystyle{ \ln(n) }[/math]. Note that the function [math]\displaystyle{ t_{j}(n) }[/math] is related to [math]\displaystyle{ \tau_{k}(n) }[/math] by the relation [math]\displaystyle{ t_j(n) = \sum_{k=0}^{j}(-1)^{j-k} {j \choose k} \tau_{k}(n) }[/math]. Again, the summatory function of [math]\displaystyle{ t_{2}(n) }[/math] is computable in about [math]\displaystyle{ x^{1/2} }[/math] steps (since [math]\displaystyle{ t_2(n) = \tau(n) - 2 }[/math] for n > 1), but [math]\displaystyle{ t_{j}(n) }[/math] for larger j appears more complicated. Curiously, this identity is a fundamental ingredient in the work of Friedlander and Iwaniec.
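
Linnik's identity is easy to verify numerically for small n. In the sketch below (helper names ours, sympy supplying divisor lists and factorizations), t(j, n) computes [math]\displaystyle{ t_j(n) }[/math] by recursing on the last factor, and the sum truncated at [math]\displaystyle{ j \leq \log_2 n }[/math] reproduces [math]\displaystyle{ \Lambda(n) }[/math]:

<pre>
# Numerical check of Linnik's identity
#   Lambda(n) = ln(n) * sum_{j>=1} (-1)^{j-1}/j * t_j(n),
# where t_j(n) counts ordered factorizations of n into j factors > 1
# (so t_j(n) = 0 once 2^j > n, making the sum finite).
from math import log, log2, isclose
from functools import lru_cache
from sympy import divisors, factorint

@lru_cache(maxsize=None)
def t(j, n):
    """t_j(n): ordered representations of n as a product of j integers > 1."""
    if j == 0:
        return 1 if n == 1 else 0
    return sum(t(j - 1, n // d) for d in divisors(n) if d > 1)

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of the prime p, else 0."""
    f = factorint(n)
    return log(next(iter(f))) if len(f) == 1 else 0.0

for n in range(2, 500):
    linnik = log(n) * sum((-1) ** (j - 1) / j * t(j, n)
                          for j in range(1, int(log2(n)) + 1))
    assert isclose(linnik, von_mangoldt(n), abs_tol=1e-9)
</pre>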