
## Revision as of 16:40, 14 June 2013

For any natural number $k_0$, an admissible $k_0$-tuple is a finite set ${\mathcal H}$ of integers of cardinality $k_0$ which avoids at least one residue class modulo $p$ for each prime $p$. (Note that one only needs to check those primes $p$ of size at most $k_0$, so this is a finitely checkable condition.) Let $H(k_0)$ denote the minimal diameter $\max {\mathcal H} - \min {\mathcal H}$ of an admissible $k_0$-tuple. As part of the Polymath8 project, we would like to find as good an upper bound on $H(k_0)$ as possible for given values of $k_0$. To a lesser extent, we would also be interested in lower bounds on this quantity. There is some scattered numerical evidence that the optimal value of $H$ is roughly of size $k_0\log k_0 + k_0$ for $k_0$ in the range of interest.
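Since admissibility only needs to be checked at primes $p \le k_0$, the definition can be tested directly. The following Python sketch is purely illustrative (the helper names `primes_up_to` and `is_admissible` are ours, not from the literature):

```python
def primes_up_to(n):
    """Return all primes <= n via the sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_admissible(H):
    """H is admissible if, for every prime p <= k0 = len(H), some
    residue class mod p contains no element of H.  Primes p > k0
    cannot have all classes occupied by only k0 elements."""
    for p in primes_up_to(len(H)):
        if len({h % p for h in H}) == p:   # every class mod p is hit
            return False
    return True
```

For example, `is_admissible([0, 2, 6])` returns `True`, while `[0, 2, 4]` fails because it occupies all three residue classes modulo 3.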

## Upper bounds

Upper bounds are primarily constructed through various "sieves" that delete one residue class modulo $p$ from an interval for many primes $p$. Examples of sieves, in roughly increasing order of efficiency, are listed below.

### Zhang sieve

The Zhang sieve uses the tuple

${\mathcal H} = \{p_{m+1}, \ldots, p_{m+k_0}\}$

where $m$ is taken to optimize the diameter $p_{m+k_0}-p_{m+1}$ while staying admissible (in practice, this basically means making $m$ as small as possible). Certainly any $m$ with $p_{m+1} > k_0$ works; in particular, one can just take ${\mathcal H}$ to be the first $k_0$ primes past $k_0$, but this is not optimal. Applying the prime number theorem then gives the upper bound $H \leq (1+o(1)) k_0\log k_0$.
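A minimal Python sketch of this construction (our own illustration, with the helpers repeated so the block is self-contained) searches for the smallest admissible shift $m$:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def is_admissible(H):
    # some residue class mod p must be unoccupied for each prime p <= len(H)
    return all(len({h % p for h in H}) < p for p in primes_up_to(len(H)))

def zhang_tuple(k0, prime_bound):
    """Smallest m (0-indexed: p_{m+1} = primes[m]) for which
    {p_{m+1}, ..., p_{m+k0}} is admissible."""
    primes = primes_up_to(prime_bound)
    for m in range(len(primes) - k0 + 1):
        H = primes[m:m + k0]
        if is_admissible(H):
            return m, H
    raise ValueError("prime_bound too small")
```

With $k_0 = 5$ this finds $m = 2$, giving the tuple $\{5, 7, 11, 13, 17\}$ of diameter 12.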

### Hensley-Richards sieve

The Hensley-Richards sieve [HR1973], [HR1973b], [R1974] uses the tuple

${\mathcal H} = \{-p_{m+\lfloor k_0/2\rfloor - 1}, \ldots, -p_{m+1}, -1, +1, p_{m+1},\ldots, p_{m+\lfloor k_0/2+1/2\rfloor-1}\}$

where m is again optimised to minimize the diameter while staying admissible.
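The same search can be carried out for the Hensley-Richards tuple; the sketch below is our own illustration (helpers repeated for self-containedness), with primes 0-indexed so that $p_{m+1}$ is `primes[m]`:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def is_admissible(H):
    # Python's % is always nonnegative, so negative entries work correctly
    return all(len({h % p for h in H}) < p for p in primes_up_to(len(H)))

def hensley_richards(k0, prime_bound):
    """Smallest m making the Hensley-Richards tuple
    {-p_{m+floor(k0/2)-1}, ..., -p_{m+1}, -1, 1, p_{m+1}, ...} admissible."""
    primes = primes_up_to(prime_bound)
    nl = k0 // 2 - 1          # number of negated primes on the left
    nr = (k0 + 1) // 2 - 1    # number of primes on the right
    for m in range(len(primes) - max(nl, nr) + 1):
        left = [-p for p in reversed(primes[m:m + nl])]
        H = left + [-1, 1] + primes[m:m + nr]
        if is_admissible(H):
            return m, H
    raise ValueError("prime_bound too small")
```

For $k_0 = 7$ this yields $m = 3$ and the tuple $\{-11, -7, -1, 1, 7, 11, 13\}$ of diameter 24.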

### Asymmetric Hensley-Richards sieve

The asymmetric Hensley-Richards sieve uses the tuple

${\mathcal H} = \{-p_{m+\lfloor k_0/2\rfloor - 1-i}, \ldots, -p_{m+1}, -1, +1, p_{m+1},\ldots, p_{m+\lfloor k_0/2+1/2\rfloor-1+i}\}$

where $i$ is an integer, and $i$ and $m$ are optimised to minimize the diameter while staying admissible.

### Schinzel sieve

Given $0 < y < z < x$, the Schinzel sieve (discussed in [S1961], [HR1973], [GR1998], [CJ2001]) sieves the interval $[1,x]$ by $1 \bmod p$ for primes $p \le y$ and by $0 \bmod p$ for primes $y < p \le z$. Provided that $z$ is large enough ($z = k_0$ clearly suffices), the first $k_0$ survivors form an admissible $k_0$-tuple (but not necessarily the narrowest one in the interval). The case $y = 1$ corresponds to a sieve of Eratosthenes; if one minimizes $z$ and takes the first $k_0$ survivors greater than 1, this yields the same admissible $k_0$-tuple as the Zhang sieve, with the minimal possible value of $m$.
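In code, the Schinzel construction with $z = k_0$ might look like the following (an illustrative sketch under our own naming, not a reference implementation):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def schinzel_tuple(k0, y, x):
    """Sieve [1,x] by 1 mod p for primes p <= y and by 0 mod p for
    primes y < p <= z, taking z = k0, and keep the first k0 survivors."""
    ps = primes_up_to(k0)            # z = k0 suffices
    survivors = [n for n in range(1, x + 1)
                 if all(n % p != 1 for p in ps if p <= y)
                 and all(n % p != 0 for p in ps if p > y)]
    if len(survivors) < k0:
        raise ValueError("x too small")
    return survivors[:k0]
```

With $k_0 = 5$ and $y = 2$ this gives $\{2, 4, 8, 14, 16\}$: the even numbers with no factor of 3 or 5.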

### Shifted Schinzel sieve

As a generalization of the Schinzel sieve, one may instead sieve a shifted interval $[s, s+x]$. This is effectively equivalent to sieving the interval $[0,x]$ by the residue classes $1-s \bmod p$ for primes $p \le y$ and $-s \bmod p$ for primes $y < p \le z$.

### Greedy sieve

Within a given interval, one sieves a single residue class $a \bmod p$ for increasing primes $p=2,3,5,\ldots$, with $a$ chosen to maximize the number of survivors. Ties can be broken in a number of ways: minimize $a\in[0,p-1]$, maximize $a\in [0,p-1]$, minimize $|a-\lfloor p/2\rfloor|$, or randomly. If not all residue classes modulo $p$ are occupied by survivors, then $a$ will be chosen so that no survivors are sieved; this necessarily occurs once $p$ exceeds the number of survivors, but typically happens much sooner. One then chooses the narrowest $k_0$-tuple ${\mathcal H}$ among the survivors (if there are fewer than $k_0$ survivors, one retries with a wider interval).
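A deterministic Python sketch of greedy sieving (our own illustration; here we simply run over all primes $p \le k_0$, which guarantees admissibility, and break ties by taking the smallest class $a$):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def greedy_tuple(k0, x):
    """Sieve [0,x] greedily: for each prime p <= k0 remove the residue
    class mod p containing the fewest survivors (ties broken by the
    smallest a), then return the narrowest window of k0 survivors."""
    survivors = list(range(x + 1))
    for p in primes_up_to(k0):
        counts = [0] * p
        for n in survivors:
            counts[n % p] += 1
        a = counts.index(min(counts))     # class with fewest survivors
        survivors = [n for n in survivors if n % p != a]
    if len(survivors) < k0:
        return None                       # retry with a wider interval
    i = min(range(len(survivors) - k0 + 1),
            key=lambda j: survivors[j + k0 - 1] - survivors[j])
    return survivors[i:i + k0]
```

For example, $k_0 = 5$ on $[0, 40]$ produces the admissible tuple $\{2, 4, 8, 10, 14\}$ of diameter 12.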

### Greedy-greedy sieve

Heuristically, the performance of the greedy sieve is significantly improved by starting with a shifted Schinzel sieve on $[s, s+x]$ using $y = 2$ and $z = \sqrt{x}$ and then continuing in a greedy fashion, as proposed by Sutherland. One first optimizes the shift value $s$ over some larger interval (e.g. $[-k_0\log k_0, k_0\log k_0]$) and then continues the sieving over primes $p > z$, greedily choosing the best residue class for each prime according to a chosen tie-breaking rule (in Sutherland's original implementation, ties are broken downward in $[0, p-1]$).

### Seeded greedy sieve

Given an initial sequence ${\mathcal S}$ that is known to contain an admissible $k_0$-tuple, one can apply greedy sieving to the minimal interval containing ${\mathcal S}$ until an admissible sequence of survivors remains, and then choose the narrowest $k_0$-tuple it contains. The sieving methods above can be viewed as the special case where ${\mathcal S}$ is the set of integers in some interval. The main difference is that the choice of ${\mathcal S}$ affects when ties occur and how they are broken with greedy sieving. One approach is to take ${\mathcal S}$ to be the union of two $k_0$-tuples that lie in roughly the same interval (see Iterated merging below).

### Iterated merging

Given an admissible $k_0$-tuple $\mathcal{H}_1$, one can attempt to improve it using an iterated merging approach suggested by Castryck. One first uses a greedy (or greedy-Schinzel) sieve to construct an admissible $k_0$-tuple $\mathcal{H}_2$ in roughly the same interval as $\mathcal{H}_1$, then performs a randomized greedy sieve using the seed set $\mathcal{S} = \mathcal{H}_1 \cup \mathcal{H}_2$ to obtain an admissible $k_0$-tuple $\mathcal{H}_3$. If $\mathcal{H}_3$ is narrower than $\mathcal{H}_2$, replace $\mathcal{H}_2$ with $\mathcal{H}_3$, otherwise try again with a new $\mathcal{H}_3$. Eventually the diameter of $\mathcal{H}_2$ will become less than or equal to that of $\mathcal{H}_1$. As long as $\mathcal{H}_1\ne \mathcal{H}_2$, one can continue to attempt to improve $\mathcal{H}_2$, but in practice one stops after some number of retries.

As described by Sutherland, one can then replace $\mathcal{H}_1$ with $\mathcal{H}_2$ and begin the process anew, yielding a randomized algorithm that can be run indefinitely. Key parameters to this algorithm are the choice of the interval used when constructing $\mathcal{H}_2$, which is typically made wider than the minimal interval containing $\mathcal{H}_1$ by a small factor $\delta$ on each side (Sutherland suggests $\delta = 0.0025$), and the number of failed attempts allowed while attempting to improve $\mathcal{H}_2$.

Eventually this process will tend to converge to a particular $\mathcal{H}_1$ that it cannot improve (or, more generally, to a set of similar $\mathcal{H}_1$'s with the same diameter). Interleaving iterated merging with the local optimizations described below often allows the algorithm to make further progress.

Iterated merging can be viewed as a form of simulated annealing. The set $\mathcal{S}$ initially contains at least two admissible $k_0$-tuples (typically many more), and as the algorithm proceeds the set $\mathcal{S}$ converges toward $\mathcal{H}_1$ and the number of admissible $k_0$-tuples it contains declines. One can regard the cardinality of the difference between $\mathcal{S}$ and $\mathcal{H}_1$ as a measure of the "temperature" of a gradually cooling system, since the number of choices available to the algorithm declines as this cardinality is reduced (more precisely, one may consider the entropy of the possible sequence of tie-breaking choices available for a given $\mathcal{S}$).
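The merging loop can be sketched as follows: a simplified illustration with our own helper names and a fixed random seed, not the actual Polymath8 implementation (a serious version would also widen the interval and interleave local optimizations):

```python
import random

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def sieve_to_admissible(S, k0, rng):
    """Greedily sieve the seed set S: for each prime p <= k0, if every
    class mod p is occupied, delete one of the least-occupied classes,
    chosen at random.  Then return the narrowest k0 survivors."""
    surv = sorted(set(S))
    for p in primes_up_to(k0):
        classes = {}
        for n in surv:
            classes.setdefault(n % p, []).append(n)
        if len(classes) < p:
            continue                  # some class mod p is already empty
        fewest = min(len(v) for v in classes.values())
        a = rng.choice(sorted(c for c, v in classes.items() if len(v) == fewest))
        surv = [n for n in surv if n % p != a]
    if len(surv) < k0:
        return None
    i = min(range(len(surv) - k0 + 1),
            key=lambda j: surv[j + k0 - 1] - surv[j])
    return surv[i:i + k0]

def iterated_merge(H1, H2, tries=50, seed=1):
    """Repeatedly re-sieve the merged seed H1 u H2, keeping any
    narrower tuple found; returns the (possibly improved) H2."""
    k0, rng = len(H1), random.Random(seed)
    for _ in range(tries):
        H3 = sieve_to_admissible(set(H1) | set(H2), k0, rng)
        if H3 and H3[-1] - H3[0] < H2[-1] - H2[0]:
            H2 = H3
    return H2
```

Because a residue class is deleted (or already empty) for every prime $p \le k_0$, any $k_0$ survivors returned by `sieve_to_admissible` form an admissible tuple.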

### Local optimizations

Let $\mathcal H = \{h_1,\ldots, h_{k_0}\}$ be an admissible $k_0$-tuple with endpoints $h_1$ and $h_{k_0}$, and let $\mathcal I$ be the interval $[h_1,h_{k_0}]$. If there exists an integer $h\in\mathcal I$ such that removing one of $\mathcal H$'s endpoints and inserting $h$ yields an admissible $k_0$-tuple $\mathcal H'$, call $\mathcal H$ contractible; if not, say that $\mathcal H$ is non-contractible. Note that $\mathcal H'$ necessarily has smaller diameter than $\mathcal H$. Any of the sieving methods described above may produce admissible $k_0$-tuples that are contractible, so it is worth testing for contractibility as a post-processing step after sieving, replacing $\mathcal H$ by $\mathcal H'$ whenever the test succeeds.
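The contractibility test is a direct search; the sketch below is our own illustration (helpers repeated for self-containedness):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def is_admissible(H):
    return all(len({h % p for h in H}) < p for p in primes_up_to(len(H)))

def contract(H):
    """If removing one endpoint of the admissible tuple H and inserting
    an interior integer yields an admissible tuple (necessarily of
    smaller diameter), return it; otherwise return None."""
    Hs = sorted(H)
    interior = sorted(set(range(Hs[0] + 1, Hs[-1])) - set(Hs))
    for endpoint in (Hs[0], Hs[-1]):
        rest = [h for h in Hs if h != endpoint]
        for h in interior:
            cand = sorted(rest + [h])
            if is_admissible(cand):
                return cand
    return None
```

For instance, the admissible tuple $\{0, 2, 6, 8, 18\}$ contracts to $\{2, 6, 8, 12, 18\}$, reducing the diameter from 18 to 16, while $\{0, 2, 6, 8, 12\}$ (of minimal diameter for $k_0 = 5$) is non-contractible.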

We can also shift $\mathcal H$ to the left by removing its right endpoint $h_{k_0}$ and replacing it with the greatest integer $h_0 < h_1$ that yields an admissible $k_0$-tuple $\mathcal H'$, and we can similarly shift $\mathcal H$ to the right. The diameter of $\mathcal H'$ need not be less than that of $\mathcal H$, but if it is, it provides a useful replacement. More generally, by shifting $\mathcal H$ repeatedly we can produce a sequence of admissible $k_0$-tuples that lie successively further to the left or right. In general the diameter of these tuples may grow as we do so, but it will also occasionally decline, and we may be able to find a shifted $\mathcal H'$ with smaller diameter than $\mathcal H$.

A more sophisticated local optimization involves a process of "adjustment" proposed by Savitt. Let $\mathcal H$ be an admissible $k_0$-tuple. For a prime $p$ and an integer $a$, let $[a;p]$ denote the residue class $a \bmod p$, i.e. the set of integers $\{x: x \equiv a \pmod{p}\}$. Call $[a;p]$ occupied if it contains an element of $\mathcal H$.

Suppose that $[a;p]$ and $[b;q]$ are occupied residue classes, for some distinct primes $p$ and $q$, and that $[a';p]$ and $[b';q]$ are unoccupied. Let $\mathcal U$ be the intersection of $\mathcal H$ with $[a;p] \cup [b;q]$, and let $\mathcal V$ be a subset of the integers that lie in the intersection of the interval $\mathcal I$ containing $\mathcal H$ and the set $[a';p] \cup [b';q]$, such that the set $\mathcal H'$ formed by removing the elements of $\mathcal U$ from $\mathcal H$ and adding the elements of $\mathcal V$ is admissible. A necessary (and often sufficient) condition for an integer $v$ to lie in $\mathcal V$ is that $v$ must not lie in a residue class $[c;r]$ that is the unique unoccupied residue class modulo $r$ for any prime $r$ other than $p$ or $q$.

The admissible set $\mathcal H'$ lies in the interval $\mathcal I$ containing $\mathcal H$, so its diameter is no greater than that of $\mathcal H$; however, its cardinality may differ. If it happens that $\mathcal H'$ contains more elements than $\mathcal H$, then by eliminating points at either end of $\mathcal H'$ we obtain an admissible $k_0$-tuple that is narrower than $\mathcal H$; thus we may "adjust" $\mathcal H$ by replacing it with $\mathcal H'$. The process of adjustment can often be applied repeatedly, yielding a sequence of successively narrower admissible $k_0$-tuples.

## Lower bounds

There is a substantial amount of literature on bounding the quantity $\pi(x+y) - \pi(x)$, the number of primes in a shifted interval $[x+1,x+y]$, where $x,y$ are natural numbers. As a general rule, whenever a bound of the form

$\pi(x+y) - \pi(x) \leq F(y)$ (*)

is established for some function F(y) of y, the method of proof also gives a bound of the form

$k_0 \leq F( H(k_0)+1 ).$ (**)

Indeed, if one assumes the prime tuples conjecture, any admissible $k_0$-tuple of diameter $H$ can be translated into an interval of the form $[x+1,x+H+1]$ for some $x$. In the opposite direction, all known bounds of the form (*) proceed by using the fact that for $x > y$, the set of primes between $x+1$ and $x+y$ is admissible, so the method of proof of (*) invariably gives (**) as well.

Examples of lower bounds are as follows:

### Brun-Titchmarsh inequality

The Brun-Titchmarsh theorem gives

$\pi(x+y) - \pi(x) \leq (1 + o(1)) \frac{2y}{\log y}$

which then gives the lower bound

$H(k_0) \geq (\frac{1}{2}-o(1)) k_0 \log k_0$.

Montgomery and Vaughan removed the $o(1)$ error from the Brun-Titchmarsh theorem [MV1973, Corollary 2], giving the more precise inequality

$k_0 \leq 2 \frac{H(k_0)+1}{\log (H(k_0)+1)}.$
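This inequality can be inverted numerically: $H(k_0)$ must be at least the smallest $H$ for which $2(H+1)/\log(H+1) \ge k_0$. A small Python sketch (our own illustration):

```python
from math import log

def mv_lower_bound(k0):
    """Smallest H with 2(H+1)/log(H+1) >= k0; the Montgomery-Vaughan
    form of Brun-Titchmarsh then gives H(k0) >= H.  Found by bisection,
    since 2(H+1)/log(H+1) is increasing for H >= 2."""
    lo, hi = 2, 10 ** 12
    while lo < hi:
        mid = (lo + hi) // 2
        if 2 * (mid + 1) / log(mid + 1) >= k0:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, $k_0 = 672$ gives $H(672) \ge 2648$ and $k_0 = 1000$ gives $H(1000) \ge 4167$, matching the Brun-Titchmarsh row of the benchmark table below.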

### First Montgomery-Vaughan large sieve inequality

The first Montgomery-Vaughan large sieve inequality [MV1973, Theorem 1] gives

$k_0 (\sum_{q \leq Q} \frac{\mu^2(q)}{\phi(q)}) \leq H(k_0)+1 + Q^2$

for any $Q > 1$, which is a parameter that one can optimise over (the optimal value is comparable to $H(k_0)^{1/2}$).
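Concretely, rearranging gives $H(k_0) \ge k_0 \sum_{q \leq Q} \frac{\mu^2(q)}{\phi(q)} - Q^2 - 1$ for every $Q$, which one can maximize over integer $Q$. A Python sketch of this computation (our own illustration, using small sieves for $\mu^2$ and $\phi$):

```python
from math import ceil

def first_mv_lower_bound(k0, Qmax=100):
    """H(k0) >= max over Q of  k0 * S(Q) - Q^2 - 1, where S(Q) is the
    sum of 1/phi(q) over squarefree q <= Q."""
    phi = list(range(Qmax + 1))
    squarefree = [True] * (Qmax + 1)
    for p in range(2, Qmax + 1):
        if phi[p] == p:                       # p is prime
            for m in range(p, Qmax + 1, p):
                phi[m] -= phi[m] // p         # multiplicative phi update
            for m in range(p * p, Qmax + 1, p * p):
                squarefree[m] = False
    best, S = 0.0, 0.0
    for q in range(1, Qmax + 1):
        if squarefree[q]:
            S += 1.0 / phi[q]
        best = max(best, k0 * S - q * q - 1)
    return ceil(best)
```

For $k_0 = 672$ this gives $H(672) \ge 2558$ (attained at $Q = 15$), matching the First Montgomery-Vaughan entry in the table below.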

### Second Montgomery-Vaughan large sieve inequality

The second Montgomery-Vaughan large sieve inequality [MV1973, Corollary 1] gives

$k_0 \leq (\sum_{q \leq z} (H(k_0)+1+cqz)^{-1} \mu(q)^2 \prod_{p|q} \frac{1}{p-1})^{-1}$

for any $z > 1$, which is a parameter similar to $Q$ in the previous inequality, and $c$ is an absolute constant. In the original paper of Montgomery and Vaughan, $c$ was taken to be $3/2$; this was then reduced to $\sqrt{22}/\pi$ [B1995, p.162] and then to $3.2/\pi$ [M1978]. It is conjectured that $c$ can in fact be taken to be 1.

## Benchmarks

Efforts to fill in the blank fields in this table are very welcome.

| $k_0$ | 3,500,000 | 181,000 | 34,429 | 26,024 | 23,283 | 22,949 | 10,719 | 5,000 | 4,000 | 3,000 | 2,000 | 1,000 | 672 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Upper bounds** | | | | | | | | | | | | | |
| First $k_0$ primes past $k_0$ | 59,874,594 | 2,530,338 | 420,878 | 310,134 | 275,082 | 270,698 | 117,714 | 50,840 | 39,660 | 28,972 | 18,386 | 8,424 | 5,406 |
| Zhang sieve | 59,093,364 | 2,486,370 | 411,932 | 303,558 | 268,536 | 264,414 | 114,806 | 49,578 | 38,596 | 28,008 | 17,766 | 8,212 | 5,216 |
| Hensley-Richards sieve | 57,554,086 | 2,422,558 | 402,790 | 297,454 | 262,794 | 258,780 | 112,868 | 48,634 | 38,498 | 27,806 | 17,726 | 8,258 | 5,314 |
| Asymmetric Hensley-Richards | | 2,418,054 | 401,700 | 296,154 | 262,286 | 258,302 | 112,562 | 48,484 | 37,932 | 27,638 | 17,676 | 8,168 | 5,220 |
| Shifted Schinzel sieve | | 2,413,228 | 400,512 | 295,162 | 262,206 | 258,000 | 112,440 | 48,726 | 38,168 | 27,632 | 17,616 | 8,160 | 5,196 |
| Greedy-greedy sieve | | 2,326,476 | 388,076 | 286,308 | 253,968 | 249,992 | 108,694 | 46,968 | 36,756 | 26,754 | 17,054 | 7,854 | 5,030 |
| Best known tuple | 57,554,086 | 2,326,476 | 386,532 | 285,210 | 252,804 | 248,910 | 108,462 | 46,824 | 36,636* | 26,610 | 16,984* | 7,806* | 5,010* |
| Engelsma data | - | - | - | - | - | - | - | - | 36,622 | 26,622 | 16,978 | 7,802 | 4,998 |
| **Predictions** | | | | | | | | | | | | | |
| $k_0\log k_0 + k_0$ | 56,238,957 | 2,372,232 | 394,096 | 290,604 | 257,405 | 253,381 | 110,119 | 47,586 | 37,176 | 27,019 | 17,202 | 7,907 | 5,046 |
| **Lower bounds** | | | | | | | | | | | | | |
| MV with $c=1$ (conjectural) | | | 234,872 | 173,420 | 153,691 | 151,298 | 66,314 | 28,781 | 22,564 | 16,456 | 10,500 | 4,858 | 3,124 |
| MV with $c=3.2/\pi$ | | | 234,529 | 173,140 | 153,447 | 151,056 | 66,211 | 28,737 | 22,523 | 16,428 | 10,480 | 4,847 | 3,118 |
| MV with $c=\sqrt{22}/\pi$ | | | 227,078 | 167,860 | 148,719 | 146,393 | 63,917 | 27,708 | 21,701 | 15,758 | 10,061 | 4,648 | 2,979 |
| Second Montgomery-Vaughan | | | 226,987 | 167,793 | 148,656 | 146,338 | 63,886 | 27,696 | 21,690 | 15,751 | 10,056 | 4,645 | 2,977 |
| Brun-Titchmarsh | 30,137,225 | 1,272,083 | 211,046 | 155,555 | 137,756 | 135,599 | 58,863 | 25,351 | 19,785 | 14,358 | 9,118 | 4,167 | 2,648 |
| First Montgomery-Vaughan | | 196,729 | 196,719 | 145,711 | 145,461 | 128,971 | 55,149 | 24,012 | 18,768 | 13,696 | 8,448 | 3,959 | 2,558 |

* indicates that the widths listed are those of the best tuples found by the methods used for the larger values of $k_0$; they are not as narrow as the literally best known tuples (due to Engelsma).

The shifted Schinzel tuples were generated with $y = 2$ using an optimally chosen interval contained in $[-k_0\log k_0, k_0\log k_0]$ (the interval is not in every case guaranteed to be optimal, particularly for larger values of $k_0$, but it is believed to be so).

The greedy-greedy tuples were generated using Sutherland's original algorithm, breaking ties downward in every case (and the optimal interval in $[-k_0\log k_0, k_0\log k_0]$ was selected on this basis). As noted by Castryck, breaking ties upward may produce better results in some cases.