# Prime counting function

One way to find primes is to find a polynomial-time algorithm to compute $\pi(x)$, the number of primes less than $x$, to reasonable accuracy. For example, if we can find $\pi(x)$ to better than $10^{k/2}$ accuracy for $k$-digit $x$, we can break the square root barrier. We don't necessarily have to do this for all $x$; just having a targeted $x$ for which we can show $\pi(x+y)-\pi(x) \gt 0$ for some $x \sim 10^k$ and $y=o(10^{k/2})$ would suffice.

Now, perhaps instead of trying to prove that intervals like $[x, x + (\log x)^A]$ contain primes unconditionally, we should first try to be much less ambitious and aim to show that *some* interval $[y, y + \sqrt{x}]$ with $y \in [x,2x]$ contains a prime number that we can discover computationally.

How? Well, let’s start by assuming that we can computationally locate all the $O( \sqrt{x} \log x)$ zeros in the critical strip up to height $\sqrt{x}$. Then what we can do is some kind of “binary search” to locate an interval $[y, y + \sqrt{x}]$ containing loads of primes: say at the $i$th step in the iteration we have that some interval $[u,v]$ has loads of primes. Then, using the explicit formula for $\psi(z) = \sum_{n \leq z} \Lambda_0(n) = z - \sum_{\rho : \zeta(\rho)=0} z^\rho/\rho$ (actually, the usual quantitative form of the identity), we can decide which of the two intervals $[u, (u+v)/2]$ or $[(u+v)/2, v]$ contains loads of primes (maybe both do — if so, pick either one for the next step in the iteration). We keep iterating like this until we have an interval of width around $x^{1/2 + \epsilon}$ or so that we know contains primes.
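
As a toy illustration of this bisection, here is a sketch in Python. Since the explicit-formula machinery is exactly what we do not yet have, an exact sieve-computed $\psi$ stands in for the zero-sum estimate (an assumption made purely for illustration); this shows only the search structure, not the claimed speedup. The function names are ours.

```python
import math

def psi(z):
    """Chebyshev psi(z) = sum of Lambda(n) for n <= z, by trial division.
    (Stand-in for the explicit-formula estimate; exact but slow.)"""
    total = 0.0
    for n in range(2, int(z) + 1):
        # Lambda(n) = log p if n is a prime power p^k, else 0.
        m, p = n, None
        for q in range(2, math.isqrt(n) + 1):
            if m % q == 0:
                p = q
                while m % q == 0:
                    m //= q
                break
        if p is None:
            total += math.log(n)      # n itself is prime
        elif m == 1:
            total += math.log(p)      # n = p^k is a prime power
    return total

def locate_prime_interval(x, width):
    """Bisect [x, 2x] down to an interval of the given width that still
    carries plenty of Lambda-mass (hence contains a prime power)."""
    u, v = x, 2 * x
    while v - u > width:
        mid = (u + v) // 2
        # Keep whichever half has the larger psi-increment.
        if psi(mid) - psi(u) >= psi(v) - psi(mid):
            v = mid
        else:
            u = mid
    return u, v

u, v = locate_prime_interval(1000, 64)
```

Each step keeps at least half of the remaining $\Lambda$-mass, so the final short interval is guaranteed to contain a prime power.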

Ok, but how do you locate the zeros of the zeta function up to height $\sqrt{x}$? Well, I’m not sure, but maybe one can try something like the following: if we can understand the value of $\zeta(s)$ at enough well-spaced, but close together, points up the half-line, then by taking local interpolations through these points, we can locate the zeros to good precision. And then to evaluate $\zeta(s)$ at these well-spaced points, maybe we can use Dirichlet polynomial approximations, and then perhaps apply some variant of Fast Fourier Transforms (if this is even possible with Dirichlet polynomials, which are not really polynomials) to evaluate them at lots of values $s = 1/2 + \delta$ quickly; perhaps FFTs can speed things up enough so that the whole process doesn’t take more than, say, $10^{k/2} \, \mathrm{polylog}(k)$ bit operations. Keep in mind also that our Dirichlet polynomial approximation only needs to hold “on average” once we are sufficiently high up the half-line, so it seems quite plausible that this could work. Note that for $s$ near $1/2$ we would need to be more careful, and get the sharpest approximation we can, because those terms contribute more in the explicit formula.
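
For illustration, here is a minimal version of the “evaluate at well-spaced points, then look for the dip” idea. It uses a straightforward Euler–Maclaurin evaluation of $\zeta(1/2+it)$ rather than the (conjectural) fast Dirichlet-polynomial scheme, so it demonstrates only the zero-locating step; it recovers the first zero near $t \approx 14.1347$.

```python
def zeta(s, N=25):
    """Euler-Maclaurin approximation to zeta(s), accurate for modest |Im s|."""
    out = sum(n ** (-s) for n in range(1, N))
    out += N ** (1 - s) / (s - 1) + 0.5 * N ** (-s)
    out += s * N ** (-s - 1) / 12                      # B_2 correction
    out -= s * (s + 1) * (s + 2) * N ** (-s - 3) / 720  # B_4 correction
    return out

# Sample |zeta(1/2+it)| on a grid of well-spaced points; near a zero on the
# critical line the modulus dips to (almost) 0, so the grid minimum locates it.
ts = [13 + 0.005 * k for k in range(401)]   # grid over [13, 15]
vals = [abs(zeta(complex(0.5, t))) for t in ts]
t0 = ts[min(range(len(ts)), key=vals.__getitem__)]
```

With a finer grid one would then refine $t_0$ by local interpolation, as suggested above.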

## Computing the parity of $\pi(x)$

Interestingly, there is an elementary way to compute the parity of $\pi(x)$ in $x^{1/2+o(1)}$ time. The observation is that for square-free $n \gt 1$, the divisor function $\tau(n)$ (the number of divisors of $n$) is equal to 2 mod 4 if $n$ is prime, and is divisible by 4 otherwise; the term $n=1$ contributes $\tau(1)\mu(1)^2 = 1$. This gives the identity

$\sum_{n\leq x} \tau(n) \mu(n)^2 = 2 \pi(x) + 1 \hbox{ mod } 4.$

Thus, to compute the parity of $\pi(x)$, it suffices to compute $\sum_{n\leq x} \tau(n) \mu(n)^2$.
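
A brute-force sanity check of this congruence for small $x$ (the term $n=1$ contributes $\tau(1)\mu(1)^2 = 1$, so the full sum is $2\pi(x)+1$ mod 4; the helper names are ours):

```python
def factorize(n):
    """Return the prime factorization of n as a dict {p: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def parity_ok(x):
    """Check sum_{n<=x} tau(n) mu(n)^2 == 2 pi(x) + 1 (mod 4)."""
    total, primes = 0, 0
    for n in range(1, x + 1):
        f = factorize(n)
        if all(e == 1 for e in f.values()):   # n squarefree (including n = 1)
            total += 2 ** len(f)               # tau(n) = 2^(#prime factors)
        if len(f) == 1 and sum(f.values()) == 1:
            primes += 1
    return total % 4 == (2 * primes + 1) % 4

all_ok = all(parity_ok(x) for x in range(1, 200))
```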

But by Möbius inversion, one can express $\tau(n) \mu(n)^2 = \sum_{d^2|n} \mu(d) \tau(n)$, and so

$\sum_{n\leq x} \tau(n) \mu(n)^2 = \sum_{d \leq x^{1/2}} \mu(d) \sum_{n\leq x: d^2 | n} \tau(n).$

Since one can compute all the $\mu(d)$ for $d \leq x^{1/2}$ in $x^{1/2+o(1)}$ time, it would suffice to compute $\sum_{n\leq x: d^2 | n} \tau(n)$ in $(x/d^2)^{1/2} x^{o(1)}$ time for each $d$. One can use the multiplicativity properties of $\tau$ to decompose this sum as a combination of $x^{o(1)}$ sums of the form $\sum_{n\leq y} \tau(n)$ for various $y \leq x/d^2$, so it suffices to show that $\sum_{n\leq y} \tau(n) = \sum_{a,b: ab \leq y} 1$ can be computed in $y^{1/2+o(1)}$ time. But this can be done by the Gauss hyperbola method; indeed

$\sum_{a,b: ab \leq y} 1 = 2 \sum_{a \leq \sqrt{y}} \lfloor \frac{y}{a} \rfloor - \lfloor \sqrt{y} \rfloor^2.$
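
In code, the hyperbola-method formula reads as follows (with a naive $O(y \log y)$ cross-check; the function names are ours):

```python
import math

def divisor_sum(y):
    """D(y) = sum_{n<=y} tau(n), computed in O(sqrt(y)) steps by the
    Gauss hyperbola method."""
    r = math.isqrt(y)
    return 2 * sum(y // a for a in range(1, r + 1)) - r * r

def divisor_sum_naive(y):
    """Naive cross-check: sum_{d<=y} floor(y/d) also counts pairs ab <= y."""
    return sum(y // d for d in range(1, y + 1))
```

The square-root running time is exactly the barrier discussed below.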

The same method lets us compute $\pi(x)$ mod 3 efficiently provided one can compute $\sum_{n\leq x} \tau_3(n) = \sum_{a,b,c: abc \leq x} 1$ efficiently. Unfortunately, so far the best algorithm for this takes time $x^{2/3+o(1)}$. If one can compute $\pi(x)$ mod $q$ for every modulus $q$ up to $O(\log x)$, one can compute $\pi(x)$ by the Chinese remainder theorem.
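
The closing remark is standard; for concreteness, here is a minimal CRT reconstruction, using $\pi(1000)=168$ as the example value (the helper name is ours):

```python
def crt(residues, moduli):
    """Recover x mod (product of moduli) from x mod each modulus,
    assuming the moduli are pairwise coprime (incremental CRT)."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        g = pow(M, -1, m)            # inverse of M mod m
        x = x + M * ((r - x) * g % m)
        M *= m
    return x % M

pi_1000 = 168                         # pi(1000)
moduli = [3, 5, 7, 11]                # product 1155 > pi(1000)
residues = [pi_1000 % m for m in moduli]
```

Since the product of the moduli exceeds $\pi(x)$, the residues determine $\pi(x)$ exactly.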

Related to this approach, there is a nice identity of Linnik. Let $\Lambda(n)$ be the von Mangoldt function and $t_{j}(n)$ the number of representations of $n$ as an ordered product of $j$ integers greater than 1; then $\Lambda(n) = \ln(n) \sum_{j=1}^{\infty} \frac{(-1)^{j-1}}{j} t_{j}(n)$. The sum is rather short, since $t_{j}(n)=0$ for $j$ larger than about $\log_2 n$. Note that the function $t_{j}(n)$ is related to $\tau_{k}(n)$ by the relation $t_j(n) = \sum_{k=0}^{j}(-1)^{j-k} {j \choose k} \tau_{k}(n)$. Again, $t_{2}(n)$ is computable in $n^{1/2}$ steps; however $t_{j}(n)$, for larger $j$, appears more complicated. Curiously, this identity is a fundamental ingredient in the work of Friedlander and Iwaniec.
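
Linnik's identity is easy to check numerically. The sketch below computes $t_j(n)$ by the obvious recursion on the first factor (exponential-time bookkeeping, purely for verification; the function names are ours):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def t(j, n):
    """Number of ordered ways to write n as a product of j integers > 1."""
    if j == 0:
        return 1 if n == 1 else 0
    return sum(t(j - 1, n // d) for d in range(2, n + 1) if n % d == 0)

def lambda_via_linnik(n):
    """Lambda(n) = ln(n) * sum_j (-1)^(j-1)/j * t_j(n)."""
    total, j = 0.0, 1
    while 2 ** j <= n:                 # t_j(n) = 0 once j > log2(n)
        total += (-1) ** (j - 1) / j * t(j, n)
        j += 1
    return math.log(n) * total

def lambda_direct(n):
    """Lambda(n) = log p if n = p^k, else 0, by trial division."""
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0
```

For example, $n=8$ gives $t_1=1$, $t_2=2$, $t_3=1$, and $\ln 8 \cdot (1 - 1 + 1/3) = \ln 2 = \Lambda(8)$.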

## Breaking the square root barrier

It is known that breaking the square root barrier for $\sum_{n \leq x} \tau(n)$ breaks the square root barrier for the parity of $\pi(x)$ also: specifically, if the former can be computed in time $x^{1/2-\epsilon+o(1)}$ for some $\epsilon \lt 1/4$, then the latter can be computed in time $x^{1/4+1/(4+16\epsilon/3) + o(1)}$. Details are here.

Using Farey sequences, one can compute the former sum in time $x^{1/3+o(1)}$ (and hence the latter in time $x^{5/11+o(1)}$):

The argument is similar to elementary proofs (such as the one hand-waved at in the exercises to Chapter 3 of Vinogradov's Elements of Number Theory) of the fact that the number of lattice points under a hyperbola equals its main term plus $O(N^{1/3} (\log N)^2)$.

What we must do is compute $\sum_{n \leq N^{1/2}} \lfloor N/n \rfloor$ in time $O(N^{1/3} (\log N)^3)$.

Lemma 1. Let $x$ be about $N^{\theta}$. Assume that $N/x^2 = a/q + \eta/q^2$, with $\gcd(a,q)=1$ and $|\eta| \leq 1/\sqrt{5}$. Assume furthermore that $q \leq Q$, where $Q=N^{\theta-1/3}/10$. Then the sum $\sum_{x \leq n \lt x+q} \{N/n\}$ can be computed in time $O(\log x)$ with an error term $\lt 1/2$.

Proof

We can write

$N/n = N/x - (N/x^2)\,t + \eta_2 t^2 / N^{3\theta-1} = N/x - (a/q + \eta/q^2)\,t + \eta_2 t^2 / N^{3\theta-1},$

where $n = x + t$ with $0 \leq t \lt q$ an integer, $|\eta| \leq 1/\sqrt{5}$, and $|\eta_2| \leq 1$ independently of $n$. Since $q \leq Q$ and $Q=N^{\theta-1/3}/10$, we have $|\eta_2| t^2 / N^{3\theta-1} \lt 1/(1000 q)$. We also have $(|\eta|/q^2)\, t \leq 1/(\sqrt{5} q)$. Thus, $|\eta_2| t^2 / N^{3\theta-1} + (|\eta|/q^2)\, t \lt 1/(2 q)$. It follows that

$|\{N/n\} - \{N/x - at/q\}| \lt 1/(2 q)$

except possibly when $\{N/x-at/q\}$ lies within $1/(2q)$ of $0$ or $1$. That exception can happen for only one value of $t=0, \ldots, q-1$ (namely, when $at$ is congruent mod $q$ to the integer closest to $\{N/x\} q$), and we can easily find that $t$ (and isolate it and compute its term exactly) in time $O(\log n)$ by taking the inverse of $a \bmod q$.

Thus, we get the sum $\sum_{x \leq n \lt x+q} \{N/n\}$ in time $O(\log n)$ with an error term less than $(1/(2q)) \cdot q = 1/2$ once we know the sum

$\sum_{0 \leq t \lt q} \{N/x - at/q\}$

exactly. But this sum is equal to $\sum_{0 \leq r \lt q} \{r/q + \epsilon/q\}$, where $\epsilon := \{qN/x\}$, and that sum is simply $(q-1)/2 + \epsilon$. Thus, we have computed the sum $\sum_{x \leq n \lt x+q} \{N/n\}$ in time $O(\log n)$. QED
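
The closed form used in this last step can be checked exactly with rational arithmetic (the helper names below are ours; the identity holds for any $a$ coprime to $q$):

```python
from fractions import Fraction
from math import gcd

def frac(z):
    """Fractional part of a Fraction, in [0, 1)."""
    return z - (z.numerator // z.denominator)

def block_identity(N, x, a, q):
    """Check: sum_{0<=t<q} {N/x - a t/q} = (q-1)/2 + {qN/x}, gcd(a,q)=1."""
    assert gcd(a, q) == 1
    lhs = sum(frac(Fraction(N, x) - Fraction(a * t, q)) for t in range(q))
    rhs = Fraction(q - 1, 2) + frac(Fraction(q * N, x))
    return lhs == rhs
```

The point is that as $t$ runs over $0, \ldots, q-1$, the values $\{N/x - at/q\}$ are exactly $(r+\epsilon)/q$ for $r = 0, \ldots, q-1$.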

Now we show why the lemma is enough for attaining our goal (namely, computing $\sum_{n\leq \sqrt{N}} \lfloor N/n\rfloor$ with no error term). We know that

$\sum_{x \leq n \lt x+q} \lfloor N/n \rfloor = \sum_{x \leq n \lt x+q} N/n - \sum_{x \leq n \lt x+q} \{N/n\},$

and the first sum on the right, whose leading term is $N(\log(x+q)-\log x)$, can be computed to precision much better than $1/2$ in time $O(\log x)$ (a bounded number of Euler–Maclaurin correction terms suffices, since $q \leq Q = N^{\theta-1/3}/10$ is much smaller than $x$).

We also know that $\sum_{x \leq n \lt x+q} \lfloor N/n \rfloor$ is an integer. Thus, it is enough to compute $\sum_{x \leq n \lt x+q} \{N/n\}$ with an error term $\lt 1/2$ in order to compute $\sum_{x \leq n \lt x+q} \lfloor N/n\rfloor$ exactly.

We now partition the range $n \in \{N^\theta,\ldots,2 N^\theta\}$, $1/3 \leq \theta \leq 1/2$, into intervals of the form $x \leq n \lt x+q$, where $q$ is the denominator of a good approximation to $N/x^2$, that is to say, an approximation of the form $a/q$, $\gcd(a,q)=1$, $q \leq Q$, with an error term $\leq 1/(\sqrt{5} q^2)$. Such good approximations are provided to us by Hurwitz's approximation theorem. Moreover, it shouldn't be hard to show that, as $x$ varies, the $q$'s will be fairly evenly distributed in $[1,Q]$. (Since Hurwitz's approximation is either one of the endpoints of the interval containing $N/x^2$ in the Farey series with upper bound $Q/2$, or the new Farey fraction produced within that interval, it is enough to show that Dirichlet's more familiar approximations have fairly evenly distributed denominators.) This means that $1/q$ should be about $(\log Q)/Q$ on average.

Thus, the number of intervals of the form $x \leq n \lt x+q$ into which $\{N^\theta,\ldots ,2 N^\theta\}$ has been partitioned should be about $(\log Q)\, N^\theta / Q$. Since the contribution of each interval to the sum $\sum_{N^\theta \leq n \leq 2N^\theta} \lfloor N/n\rfloor$ can (by Lemma 1 and the paragraph after its proof) be computed exactly in time $O(\log x)$, we can compute the entire sum $\sum_{N^\theta \leq n \leq 2 N^\theta} \lfloor N/n\rfloor$ in time $O((\log x) (\log Q) N^\theta/Q) = O((\log N)^2 N^{1/3})$.

(There are bits of the sum (at the end and the beginning) that belong to two truncated intervals, but those can be computed directly in time $O(Q) = O(N^{1/6})$.)

We partition $\{1,2,\ldots ,\sqrt{N}\}$ into $O(\log N)$ intervals of the form $\{N^\theta,\ldots ,2 N^\theta\}$, and obtain a total running time of $O((\log N)^3 N^{1/3})$, as claimed.