Prime counting function

From Polymath1Wiki


One way to find primes is to find a polynomial time algorithm to compute π(x), the number of primes less than x, to reasonable accuracy. For example, if we can find π(x) to better than 10^{k/2} accuracy for k-digit x, we can break the square root barrier. We don't necessarily have to do this for all x; just having a targeted x for which we can show \pi(x + y) - \pi(x) > 0 for some x \sim 10^k and y = o(10^{k/2}) would suffice.

Now, perhaps instead of trying to prove that intervals like [x, x + (\log x)^A] contain primes unconditionally, we should first try to be much less ambitious and aim to show that *some* interval [y, y + \sqrt{x}] with y \in [x,2x], contains a prime number that we can discover computationally.

How? Well, let’s start by assuming that we can computationally locate all the O( \sqrt{x} \log x) zeros in the critical strip up to height \sqrt{x}. Then what we can do is some kind of “binary search” to locate an interval [y, y + \sqrt{x}] containing loads of primes: say at the i-th step in the iteration we have that some interval [u,v] has loads of primes. Then, using the explicit formula for \psi(z) = \sum_{n \leq z} \Lambda_0(n) = z - \sum_{\rho : \zeta(\rho)=0} z^\rho/\rho (actually, the usual quantitative form of the identity), we can decide which of the two intervals [u,(u + v) / 2] or [(u + v) / 2,v] contains loads of primes (maybe both do — if so, pick either one for the next step in the iteration). We keep iterating like this until we have an interval of width around x^{1/2 + \epsilon} or so that we know contains primes.
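As a toy illustration of this binary-search step, the sketch below uses a naive trial-division count as a stand-in for the explicit-formula estimate of ψ on an interval (the real algorithm would evaluate the zero sum instead); the starting point 10^4 and target width 100 are arbitrary choices for the example.

```python
def count_primes_in(lo, hi):
    """Naive prime count on [lo, hi); a stand-in for the explicit-formula
    estimate of psi on the interval (the real algorithm would use zeta zeros)."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

def locate_prime_interval(x, target_width):
    """Binary search on [x, 2x]: repeatedly keep the half with more primes."""
    lo, hi = x, 2 * x
    while hi - lo > target_width:
        mid = (lo + hi) // 2
        if count_primes_in(lo, mid) >= count_primes_in(mid, hi):
            hi = mid
        else:
            lo = mid
    return lo, hi

lo, hi = locate_prime_interval(10**4, 100)
print(lo, hi, count_primes_in(lo, hi))
```

Since each step keeps the half containing at least half of the remaining primes, the final short interval is guaranteed to contain primes.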

Ok, but how do you locate the zeros of the zeta function up to height \sqrt{x}? Well, I’m not sure, but maybe one can try something like the following: if we can understand the value of ζ(s) for enough well-spaced, but close together, points up the half-line, then by taking local interpolations with these points, we can locate the zeros to good precision. And then to evaluate ζ(s) at these well-spaced points, maybe we can use Dirichlet polynomial approximations, and then perhaps apply some variant of Fast Fourier Transforms (if this is even possible with Dirichlet polynomials, which are not really polynomials) to evaluate them at lots of values s = 1/2 + it quickly — perhaps FFTs can speed things up enough so that the whole process doesn’t take more than, say, 10^{k/2} \mathrm{polylog}(k) bit operations. Keep in mind also that our Dirichlet polynomial approximation only needs to hold “on average” once we are sufficiently high up the half-line, so it seems quite plausible that this could work. Note that for s near to 1/2 we would need to be more careful, and get the sharpest approximation we can, because those terms contribute more in the explicit formula.


Computing the parity of π(x)

Interestingly, there is an elementary way to compute the parity of π(x) in x^{1/2 + o(1)} time. The observation is that for square-free n, the divisor function τ(n) (the number of divisors of n) is equal to 2 mod 4 if n is prime, and is divisible by 4 otherwise. This gives the identity

2\pi(x) \equiv \sum_{1 < n < x} \tau(n) \mu(n)^2 \pmod 4.

Thus, to compute the parity of π(x), it suffices to compute

\sum_{n < x} \tau(n) \mu(n)^2.
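A brute-force numerical check of this parity criterion (summing over 2 \leq n < x, since the n = 1 term contributes \tau(1)\mu(1)^2 = 1):

```python
def tau(n):
    """Number of divisors of n (naive)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def squarefree(n):
    """mu(n)^2: 1 if n is squarefree, else 0."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return 0
        d += 1
    return 1

def parity_pi(x):
    """Parity of pi(x) via 2*pi(x) = sum_{1<n<x} tau(n)*mu(n)^2 (mod 4)."""
    s = sum(tau(n) * squarefree(n) for n in range(2, x)) % 4
    return s // 2  # s is 0 or 2

def pi_direct(x):
    """pi(x): number of primes less than x, by trial division."""
    return sum(1 for n in range(2, x)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

for x in (10, 50, 100, 200):
    assert parity_pi(x) == pi_direct(x) % 2
```

(Of course, this brute-force version runs in time x^{1+o(1)}; the point of the section is that the sum can be evaluated in x^{1/2+o(1)} time via the hyperbola method below.)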

But by Möbius inversion, one can express  \tau(n) \mu(n)^2 = \sum_{d^2|n} \mu(d) \tau(n), and so

\sum_{n<x} \tau(n) \mu(n)^2 = \sum_{d < x^{1/2}} \mu(d) \sum_{n<x: d^2 | n} \tau(n).

Since one can compute all the μ(d) for d < x^{1/2} in x^{1/2 + o(1)} time, it would suffice to compute \sum_{n<x: d^2 | n} \tau(n) in (x/d^2)^{1/2} x^{o(1)} time for each d. One can use the multiplicativity properties of τ to decompose this sum as a combination of x^{o(1)} sums of the form

\sum_{n < y} \tau(n)

for various y \leq x/d^2, so it suffices to show that

\sum_{n < y} \tau(n) = \sum_{a,b:\ ab < y} 1

can be computed in y^{1/2 + o(1)} time. But this can be done by the Gauss hyperbola method; indeed

 \sum_{a,b: ab < y} 1 = 2 \sum_{a < \sqrt{y}} \lfloor \frac{y}{a} \rfloor - \lfloor \sqrt{y} \rfloor^2.
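This identity is easy to check numerically; the sketch below states it for \sum_{n \leq N} \tau(n) (so that \sum_{n < y} \tau(n) is the case N = y - 1 for integer y):

```python
import math

def divisor_summatory(N):
    """sum_{n<=N} tau(n) = #{(a,b): ab <= N} by the Gauss hyperbola method,
    in O(sqrt(N)) steps: 2*sum_{a<=sqrt(N)} floor(N/a) - floor(sqrt(N))^2."""
    r = math.isqrt(N)
    return 2 * sum(N // a for a in range(1, r + 1)) - r * r

def divisor_summatory_brute(N):
    """The same sum by brute force, for checking."""
    return sum(sum(1 for d in range(1, n + 1) if n % d == 0)
               for n in range(1, N + 1))

for N in (1, 2, 10, 99, 100, 1000):
    assert divisor_summatory(N) == divisor_summatory_brute(N)
```

The O(\sqrt{N}) running time of `divisor_summatory` is exactly the "square root barrier" that the last section of this page tries to break.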

The same method lets us compute π(x) mod 3 efficiently provided one can compute

\sum_{n < x} \tau_3(n) = \sum_{a,b,c:\ abc < x} 1

efficiently. Unfortunately, so far the best algorithm for this takes time x^{2/3 + o(1)}. If one can compute π(x) mod q for every modulus q up to O(\log x), one can compute π(x) by the Chinese remainder theorem.

Related to this approach, there is a nice identity of Linnik. Let Λ(n) be the von Mangoldt function and t_j(n) the number of representations of n as ordered products of j integers greater than 1; then \Lambda(n) = \ln(n) \sum_{j=1}^{\infty} \frac{(-1)^{j-1}}{j} t_{j}(n). The sum is rather short, since t_j(n) = 0 for j larger than about \ln(n). Note that the function t_j(n) is related to τ_k(n) by the relation t_j(n) = \sum_{k=0}^{j}(-1)^{j-k} {j \choose k} \tau_{k}(n). Again, t_2(n) is computable in n^{1/2} steps; however t_j(n), for larger j, appears more complicated. Curiously, this is a fundamental ingredient in the work of Friedlander and Iwaniec.
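Linnik's identity can be checked directly for small n. Here t_j(n) is computed by a naive recursion over the first factor (with the convention t_0(n) = 1 iff n = 1) — bookkeeping for illustration, not an efficient algorithm:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def t(j, n):
    """t_j(n): ordered ways to write n as a product of j factors, each > 1."""
    if j == 0:
        return 1 if n == 1 else 0
    return sum(t(j - 1, n // d) for d in range(2, n + 1) if n % d == 0)

def von_mangoldt(n):
    """Lambda(n) = ln(n) * sum_j (-1)^(j-1)/j * t_j(n).
    The series stops once 2^j > n, since then t_j(n) = 0."""
    s = sum((-1) ** (j - 1) / j * t(j, n)
            for j in range(1, n.bit_length()))
    return math.log(n) * s
```

For instance, for n = 8 one gets t_1 = 1, t_2 = 2 (the pairs 2·4 and 4·2), t_3 = 1, so the bracket is 1 - 2/2 + 1/3 = 1/3 and \Lambda(8) = \frac{1}{3}\ln 8 = \ln 2, as it should be.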

Breaking the square root barrier

It is known that breaking the square root barrier for \sum_{n \leq x} \tau(n) breaks the square root barrier for the parity of π(x) also: specifically, if the former can be computed in time x^{1/2 - \epsilon + o(1)} for some ε < 1/4, then the latter can be computed in time x^{1/4 + 1/(4 + 16\epsilon/3) + o(1)}. Details are here.

Using Farey sequences, one can compute the former sum in time x^{1/3 + o(1)} (and hence the latter sum in time x^{5/11 + o(1)}):

The argument is similar to elementary proofs (such as the one waved at in the exercises to Chapter 3 of Vinogradov's Elements of Number Theory) of the fact that the number of points under the hyperbola equals (main term) + O(N^{1/3}(\log N)^2).

What we must do is compute \sum_{n \leq N^{1/2}} \lfloor N/n \rfloor in time O(N^{1/3}(\log N)^3).

Lemma 1. Let x be about N^\theta. Assume that N/x^2 = a/q + \eta/q^2, \gcd(a,q)=1, |\eta| \leq 1/\sqrt{5}. Assume furthermore that q \leq Q, where Q = N^{\theta - 1/3}/10. Then the sum

\sum_{x \leq n < x+q} \{N/n\}

can be computed in time O(\log x) with an error term < 1/2.

Proof

We can write

N/n = N/x - (N/x^2)\, t + \eta_2 t^2/N^{3\theta - 1} = N/x - (a/q + \eta/q^2)\, t + \eta_2 t^2/N^{3\theta - 1},

where n = x + t, 0 \leq t < q an integer, |\eta| \leq 1/\sqrt{5} and |\eta_2| \leq 1. Since q \leq Q and Q = N^{\theta - 1/3}/10, we have |\eta_2| t^2/N^{3\theta - 1} < 1/(1000 q). We also have |\eta| t/q^2 \leq 1/(\sqrt{5}\, q). Thus |\eta_2 t^2/N^{3\theta - 1} - \eta t/q^2| < 1/(2q). It follows that

|\{N/n\} - \{N/x - at/q\}| < 1/(2q)

except when \{N/x - at/q\} < 1/(2q) or \{N/x - at/q\} > 1 - 1/(2q). That exception can happen for only one value of t = 0, \ldots, q-1 (namely, when at is congruent mod q to the integer closest to \{N/x\} q), and we can easily find that t (and isolate it and compute its term exactly) in time O(\log n) by taking the inverse of a \bmod q.

Thus, we get the sum

\sum_{x \leq n < x+q} \{N/n\}

in time O(\log n) with an error term less than (1/(2q)) \cdot q = 1/2, once we know the sum

\sum_{0 \leq t < q} \{N/x - at/q\}

exactly. But this sum is equal to

\sum_{0 \leq r < q} \{r/q + \epsilon/q\},

where \epsilon := \{qN/x\}, and that sum is simply (q-1)/2 + \epsilon. Thus, we have computed the sum

\sum_{x \leq n < x+q} \{N/n\}

in time O(\log n). QED
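The closed form at the heart of the proof, \sum_{0 \leq t < q} \{N/x - at/q\} = (q-1)/2 + \epsilon with \epsilon = \{qN/x\}, can be verified in exact rational arithmetic; the concrete values of N/x, a, q below are arbitrary (only \gcd(a,q) = 1 matters, so that at runs over all residues mod q):

```python
from fractions import Fraction
from math import gcd

def frac(z):
    """Fractional part of an exact rational."""
    return z - (z.numerator // z.denominator)

def phase_sum(Nx, a, q):
    """sum_{0 <= t < q} {Nx - a*t/q}, computed exactly."""
    return sum(frac(Nx - Fraction(a * t, q)) for t in range(q))

Nx = Fraction(10007, 97)  # arbitrary stand-in for N/x
for a, q in ((5, 8), (3, 7), (7, 12)):
    assert gcd(a, q) == 1
    assert phase_sum(Nx, a, q) == Fraction(q - 1, 2) + frac(q * Nx)
```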


Now we show why the lemma is enough for attaining our goal (namely, computing \sum_{n\leq \sqrt{N}} \lbrack N/n\rbrack with no error term). We know that

\sum_{x \leq n < x+q} \lfloor N/n \rfloor = \sum_{x \leq n < x+q} N/n - \sum_{x \leq n < x+q} \{N/n\} = N(\log(x+q) - \log(x)) - \sum_{x \leq n < x+q} \{N/n\}.

We also know that

\sum_{x \leq n < x+q} \lfloor N/n \rfloor

is an integer. Thus, it is enough to compute

\sum_{x \leq n < x+q} \{N/n\}

with an error term < 1/2 in order to compute

\sum_{x \leq n < x+q} \lfloor N/n \rfloor

exactly.
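In other words, since the floor sum is an integer, knowing \sum N/n exactly and \sum \{N/n\} to within 1/2 pins it down by rounding to the nearest integer. A small sketch (the values of N, x, q and the injected error 49/100 are arbitrary):

```python
from fractions import Fraction

N, x, q = 10**6, 1000, 50                          # arbitrary small instance

exact = sum(N // n for n in range(x, x + q))       # the target integer
A = sum(Fraction(N, n) for n in range(x, x + q))   # sum N/n, exact rational
B = A - exact                                      # sum {N/n}, exact
B_approx = B + Fraction(49, 100)                   # simulate an error < 1/2
assert round(A - B_approx) == exact                # rounding recovers the sum
```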

We now partition the range n \in \{N^\theta, \ldots, 2N^\theta\}, 1/3 \leq \theta \leq 1/2, into intervals of the form x \leq n < x+q, where q is the denominator of a good approximation to N/x^2, that is to say, an approximation of the form a/q, \gcd(a,q)=1, q \leq Q, with an error term \leq 1/(\sqrt{5} q^2). Such good approximations are provided to us by Hurwitz's approximation theorem. Moreover, it shouldn't be hard to show that, as x varies, the q's will be fairly evenly distributed in [1,Q]. (Since Hurwitz's approximation is either one of the ends of the interval containing N/x^2 in the Farey series with upper bound Q/2, or the new Farey fraction produced within that interval, it is enough to show that Dirichlet's more familiar approximations have fairly evenly distributed denominators.) This means that 1/q should be about (\log Q)/Q on average.
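A rough sketch of this partition step, using Python's `Fraction.limit_denominator` as a stand-in for the Hurwitz/Farey approximation (the parameters N and θ are arbitrary, and no claim is made here about the 1/(\sqrt{5}q^2) quality of the resulting approximations):

```python
from fractions import Fraction

N, theta = 10**9, 0.5
M = int(N ** theta)                       # x ranges over {M, ..., 2M}
Q = max(1, int(N ** (theta - 1/3)) // 10)

x, intervals = M, []
while x <= 2 * M:
    a_q = Fraction(N, x * x).limit_denominator(Q)  # approximation to N/x^2
    q = a_q.denominator                            # 1 <= q <= Q
    intervals.append((x, x + q))
    x += q

# The intervals tile {M, ..., 2M} with no gaps or overlaps.
assert all(1 <= hi - lo <= Q for lo, hi in intervals)
assert all(intervals[i][1] == intervals[i + 1][0]
           for i in range(len(intervals) - 1))
```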

Thus, the number of intervals of the form x \leq n < x+q into which \{N^\theta, \ldots, 2N^\theta\} has been partitioned should be about (\log Q) N^\theta / Q. Since the contribution of each interval to the sum \sum_{N^\theta \leq n \leq 2N^\theta} \lfloor N/n \rfloor can (by Lemma 1 and the paragraph after its proof) be computed exactly in time O(\log x), we can compute the entire sum \sum_{N^\theta \leq n \leq 2N^\theta} \lfloor N/n \rfloor in time O((\log x)(\log Q) N^\theta / Q) = O((\log N)^2 N^{1/3}).

(There are bits of the sum (at the end and the beginning) that belong to two truncated intervals, but those can be computed in time O(Q) \ll O(N^{1/6}).)

We partition \{1,2,\ldots ,\sqrt{N}\} into O(\log N) intervals of the form \{N^\theta,\ldots ,2 N^\theta\}, and obtain a total running time of O((\log N)^3 N^{1/3}), as claimed.
