# Finding narrow admissible tuples

For any natural number $k_0$, an admissible $k_0$-tuple is a finite set ${\mathcal H}$ of integers of cardinality $k_0$ which avoids at least one residue class modulo $p$ for each prime $p$. (Note that one only needs to check those primes $p$ of size at most $k_0$, so this is a finitely checkable condition.) Let $H(k_0)$ denote the minimal diameter $\max {\mathcal H} - \min {\mathcal H}$ of an admissible $k_0$-tuple. As part of the Polymath8 project, we would like to find as good an upper bound on $H(k_0)$ as possible for given values of $k_0$. To a lesser extent, we would also be interested in lower bounds on this quantity. There is some scattered numerical evidence that the optimal value of $H(k_0)$ is roughly of size $k_0 \log k_0 + k_0$ for $k_0$ in the range of interest.
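
Since only primes $p \le k_0$ need to be checked, admissibility is easy to test directly. A minimal Python sketch of such a test (the function names `primes_upto` and `is_admissible` are ours, purely for illustration):

```python
def primes_upto(n):
    """All primes <= n by the sieve of Eratosthenes."""
    if n < 2:
        return []
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(s) if flag]


def is_admissible(H):
    """Check that H misses at least one residue class mod p for every
    prime p <= len(H); larger p cannot be covered by len(H) residues."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))


# {0, 2, 6} is an admissible 3-tuple; {0, 2, 4} is not (it hits 0, 1, 2 mod 3)
print(is_admissible([0, 2, 6]), is_admissible([0, 2, 4]))  # -> True False
```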

## Upper bounds

Upper bounds are primarily constructed through various "sieves" that delete one residue class modulo $p$ from an interval for a lot of primes $p$. Examples of sieves, in roughly increasing order of efficiency, are listed below.

### Zhang sieve

The Zhang sieve uses the tuple

${\mathcal H} = \{p_{m+1}, \ldots, p_{m+k_0}\}$

where $m$ is taken to optimize the diameter $p_{m+k_0}-p_{m+1}$ while staying admissible (in practice, this basically means making $m$ as small as possible). Certainly any $m$ with $p_{m+1} \gt k_0$ works (in particular, one can just take ${\mathcal H}$ to be the first $k_0$ primes past $k_0$), but this is not optimal. Applying the prime number theorem then gives the upper bound $H(k_0) \leq (1+o(1)) k_0\log k_0$.
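
A sketch of this construction in Python, taking the minimal $m$ that yields an admissible tuple (helpers are inlined so the snippet is self-contained; the small $k_0$ is only for demonstration):

```python
def primes_upto(n):
    """All primes <= n (sieve of Eratosthenes)."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, f in enumerate(s) if f]


def is_admissible(H):
    """H misses a residue class mod p for every prime p <= len(H)."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))


def zhang_tuple(k0, m, primes):
    """{p_{m+1}, ..., p_{m+k0}}, where primes[i-1] = p_i."""
    return primes[m:m + k0]


k0 = 100
primes = primes_upto(10000)
# Smallest m giving an admissible tuple; any m with p_{m+1} > k0 certainly works,
# so the search below is guaranteed to terminate.
m = next(m for m in range(len(primes) - k0)
         if is_admissible(zhang_tuple(k0, m, primes)))
H = zhang_tuple(k0, m, primes)
print(m, H[-1] - H[0])
```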

### Hensley-Richards sieve

The Hensley-Richards sieve [HR1973], [HR1973b], [R1974] uses the tuple

${\mathcal H} = \{-p_{m+\lfloor k_0/2\rfloor - 1}, \ldots, -p_{m+1}, -1, +1, p_{m+1},\ldots, p_{m+\lfloor k_0/2+1/2\rfloor-1}\}$

where $m$ is again optimised to minimize the diameter while staying admissible.
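
The symmetric construction can be sketched in Python as follows (again with inlined helpers and illustrative names; the demonstration takes the minimal admissible $m$):

```python
def primes_upto(n):
    """All primes <= n (sieve of Eratosthenes)."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, f in enumerate(s) if f]


def is_admissible(H):
    """H misses a residue class mod p for every prime p <= len(H)."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))


def hensley_richards(k0, m, primes):
    """{-p_{m+floor(k0/2)-1}, ..., -p_{m+1}, -1, +1, p_{m+1}, ...,
    p_{m+ceil(k0/2)-1}} in increasing order, with primes[i-1] = p_i."""
    left = [-primes[i - 1] for i in range(m + k0 // 2 - 1, m, -1)]
    right = [primes[i - 1] for i in range(m + 1, m + (k0 + 1) // 2)]
    return left + [-1, 1] + right


k0 = 100
primes = primes_upto(10000)
m = next(m for m in range(len(primes) - k0)
         if is_admissible(hensley_richards(k0, m, primes)))
H = hensley_richards(k0, m, primes)
print(m, H[-1] - H[0])
```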

### Asymmetric Hensley-Richards sieve

The asymmetric Hensley-Richards sieve uses the tuple

${\mathcal H} = \{-p_{m+\lfloor k_0/2\rfloor - 1-i}, \ldots, -p_{m+1}, -1, +1, p_{m+1},\ldots, p_{m+\lfloor k_0/2+1/2\rfloor-1+i}\}$

where $i$ is an integer and $i,m$ are optimised to minimize the diameter while staying admissible.

### Schinzel sieve

Given $0 \lt y \lt z$, the Schinzel sieve (discussed in [HR1973], [CJ2001]) first sieves by $1\bmod p$ for primes $p \le y$ and by $0\bmod p$ for primes $y \lt p \le z$. For a given choice of $y$, the parameter $z$ is minimized subject to ensuring that the first $k_0$ survivors (after the first) form an admissible sequence $\mathcal{H}$, so the only free parameter is $y$, which is chosen to minimize the diameter of $\mathcal{H}$. The case $y=1$ corresponds to a sieve of Eratosthenes, which will typically yield the same sequence as Zhang with the minimal (but not necessarily optimal) value of $m$ that yields an admissible $k_0$-tuple. As originally proposed, the Schinzel sieve works over the positive integers, but one can apply the sieve to any given interval, and as with the Hensley-Richards sieve, it is generally better to use an asymmetric interval (which need not contain the origin).
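
The following Python sketch applies the Schinzel sieve to an interval $[x_0,x_1]$, taking the minimal $z$ for which the first $k_0$ survivors (after the first) are admissible (all function names are ours, and the parameters in the demonstration are arbitrary):

```python
def primes_upto(n):
    """All primes <= n (sieve of Eratosthenes)."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, f in enumerate(s) if f]


def is_admissible(H):
    """H misses a residue class mod p for every prime p <= len(H)."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))


def schinzel_tuple(k0, y, x0, x1):
    """Sieve [x0, x1] by 1 mod p for p <= y and 0 mod p for y < p <= z,
    taking the minimal z for which the first k0 survivors (after the
    first) are admissible; returns those k0 survivors."""
    for z in range(y, x1 - x0 + 1):
        ps = primes_upto(z)
        surv = [n for n in range(x0, x1 + 1)
                if all(n % p != (1 if p <= y else 0) for p in ps)]
        if len(surv) > k0 and is_admissible(surv[1:k0 + 1]):
            return surv[1:k0 + 1]
    return None  # interval too short: retry with a wider one


H = schinzel_tuple(50, 2, 0, 700)
print(len(H), H[-1] - H[0])
```

With $y=2$ the class $1 \bmod 2$ is sieved, so every survivor is even.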

### Greedy sieve

Within a given interval, one sieves a single residue class $a \bmod p$ for increasing primes $p=2,3,5,\ldots$, with $a$ chosen to maximize the number of survivors. Ties can be broken in a number of ways: minimize $a\in[0,p-1]$, maximize $a\in [0,p-1]$, minimize $|a-\lfloor p/2\rfloor|$, or randomly. If not all residue classes modulo $p$ are occupied by survivors, then $a$ will be chosen so that no survivors are sieved. This necessarily occurs once $p$ exceeds the number of survivors but typically happens much sooner. One then chooses the narrowest $k_0$-tuple ${\mathcal H}$ among the survivors (if there are fewer than $k_0$ survivors, retry with a wider interval).
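
A Python sketch of the greedy sieve (names ours; ties broken by smallest $a$, which is one of the options listed above):

```python
def primes_upto(n):
    """All primes <= n (sieve of Eratosthenes)."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, f in enumerate(s) if f]


def is_admissible(H):
    """H misses a residue class mod p for every prime p <= len(H)."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))


def greedy_tuple(k0, x0, x1):
    """For p = 2, 3, 5, ... sieve from [x0, x1] the class a mod p holding
    the fewest survivors (ties broken by smallest a).  Sieving one class
    for every p <= k0 guarantees that any k0 survivors are admissible.
    Returns the narrowest k0 consecutive survivors, or None."""
    surv = list(range(x0, x1 + 1))
    for p in primes_upto(k0):
        counts = [0] * p
        for n in surv:
            counts[n % p] += 1
        a = counts.index(min(counts))  # an empty class costs nothing to sieve
        surv = [n for n in surv if n % p != a]
    if len(surv) < k0:
        return None                    # retry with a wider interval
    i = min(range(len(surv) - k0 + 1),
            key=lambda j: surv[j + k0 - 1] - surv[j])
    return surv[i:i + k0]


H = greedy_tuple(50, 0, 600)
print(len(H), H[-1] - H[0])
```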

### Greedy-Schinzel sieve

Heuristically, the performance of the greedy sieve is significantly improved by starting with a Schinzel sieve on an interval $[x_0,x_1]$ with $y=2$ and $z=\sqrt{x_1-x_0}$ and then continuing in a greedy fashion. This method was proposed by Sutherland and was originally referred to as a "greedy-greedy" approach. This nomenclature arose from the fact that one optimization that can be applied to the standard Schinzel sieve on a given interval is to "greedily" avoid sieving modulo primes where the set of survivors is already admissible (this may occur for primes less than the minimal value of $z$ that yields $k_0$ survivors), while a second optimization is to use a value of $z$ that is intentionally smaller than necessary and switch to greedy sieving for primes greater than $z$. With the choice $z=\sqrt{x_1-x_0}$, unless the initial interval is much larger than necessary, all primes up to $z$ will require a residue class to be sieved, and the first "greedy" optimization seldom applies.

### Seeded greedy sieve

Given an initial sequence ${\mathcal S}$ that is known to contain an admissible $k_0$-tuple, one can apply greedy sieving to the minimal interval containing ${\mathcal S}$ until an admissible sequence of survivors remains, and then choose the narrowest $k_0$-tuple it contains. The sieving methods above can be viewed as the special case where ${\mathcal S}$ is the set of integers in some interval. The main difference is that the choice of ${\mathcal S}$ affects when ties occur and how they are broken with greedy sieving. One approach is to take ${\mathcal S}$ to be the union of two $k_0$-tuples that lie in roughly the same interval (see Iterated merging below).

### Iterated merging

Given an admissible $k_0$-tuple $\mathcal{H}_1$, one can attempt to improve it using an iterated merging approach suggested by Castryck. One first uses a greedy (or greedy-Schinzel) sieve to construct an admissible $k_0$-tuple $\mathcal{H}_2$ in roughly the same interval as $\mathcal{H}_1$, then performs a randomized greedy sieve using the seed set $\mathcal{S} = \mathcal{H}_1 \cup \mathcal{H}_2$ to obtain an admissible $k_0$-tuple $\mathcal{H}_3$. If $\mathcal{H}_3$ is narrower than $\mathcal{H}_2$, replace $\mathcal{H}_2$ with $\mathcal{H}_3$, otherwise try again with a new $\mathcal{H}_3$. Eventually the diameter of $\mathcal{H}_2$ will become less than or equal to that of $\mathcal{H}_1$. As long as $\mathcal{H}_1\ne \mathcal{H}_2$, one can continue to attempt to improve $\mathcal{H}_2$, but in practice one stops after some number of retries.

As described by Sutherland, one can then replace $\mathcal{H}_1$ with $\mathcal{H}_2$ and begin the process anew, yielding a randomized algorithm that can be run indefinitely. Key parameters to this algorithm are the choice of the interval used when constructing $\mathcal{H}_2$, which is typically made wider than the minimal interval containing $\mathcal{H}_1$ by a small factor $\delta$ on each side (Sutherland suggests $\delta = 0.0025$), and the number of failed attempts allowed while attempting to improve $\mathcal{H}_2$.

Eventually this process will tend to converge to a particular $\mathcal{H}_1$ that it cannot improve (or more generally, a set of similar $\mathcal{H}_1$'s with the same diameter). Interleaving iterated merging with the local optimizations described below often allows the algorithm to make further progress.

Iterated merging can be viewed as a form of simulated annealing. The set $\mathcal{S}$ initially contains at least two admissible $k_0$-tuples (typically many more), and as the algorithm proceeds the set $\mathcal{S}$ converges toward $\mathcal{H}_1$ and the number of admissible $k_0$-tuples it contains declines. One can regard the cardinality of the difference between $\mathcal{S}$ and $\mathcal{H}_1$ as a measure of the "temperature" of a gradually cooling system, since the number of choices available to the algorithm declines as this cardinality is reduced (more precisely, one may consider the entropy of the possible sequence of tie-breaking choices available for a given $\mathcal{S}$).

### Local optimizations

Let $\mathcal H = \{h_1,\ldots, h_{k_0}\}$ be an admissible $k_0$-tuple with endpoints $h_1$ and $h_{k_0}$, and let $\mathcal I$ be the interval $[h_1,h_{k_0}]$. If there exists an integer $h\in\mathcal I$ such that removing one of $\mathcal H$'s endpoints and inserting $h$ yields an admissible $k_0$-tuple $\mathcal H'$, then call $\mathcal H$ contractible, and if not, say that $\mathcal H$ is non-contractible. Note that $\mathcal H'$ necessarily has smaller diameter than $\mathcal H$. Any of the sieving methods described above may produce admissible $k_0$-tuples that are contractible, so it is worth testing for contractibility as a post-processing step after sieving and replacing $\mathcal H$ by $\mathcal H'$ if this test succeeds.
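
A brute-force contractibility test is straightforward: try dropping each endpoint and inserting every interior candidate. A Python sketch (names ours, with small inlined helpers):

```python
def primes_upto(n):
    """All primes <= n (sieve of Eratosthenes)."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, f in enumerate(s) if f]


def is_admissible(H):
    """H misses a residue class mod p for every prime p <= len(H)."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))


def contract(H):
    """If dropping one endpoint of the admissible tuple H and inserting an
    interior integer gives an admissible tuple (necessarily of smaller
    diameter), return it; otherwise return None (H is non-contractible)."""
    H = sorted(H)
    for rest in (H[1:], H[:-1]):          # drop left or right endpoint
        for h in range(H[0] + 1, H[-1]):  # candidate interior points
            if h not in rest and is_admissible(rest + [h]):
                return sorted(rest + [h])
    return None


print(contract([0, 2, 8]))  # -> [2, 4, 8] (diameter 8 shrinks to 6)
print(contract([0, 2, 6]))  # -> None (already optimal, since H(3) = 6)
```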

We can also shift $\mathcal H$ to the left by removing its right end point $h_{k_0}$ and replacing it with the greatest integer $h_0 \lt h_1$ that yields an admissible $k_0$-tuple $\mathcal H'$, and we can similarly shift $\mathcal H$ to the right. The diameter of $\mathcal H'$ need not be less than that of $\mathcal H$, but if it is, it provides a useful replacement. More generally, by shifting $\mathcal H$ repeatedly we can produce a sequence of admissible $k_0$-tuples that lie successively further to the left or right. In general the diameter of these tuples may grow as we do so, but it will also occasionally decline, and we may be able to find a shifted $\mathcal H'$ with smaller diameter than $\mathcal H$.

A more sophisticated local optimization involves a process of "adjustment" proposed by Savitt. Let $\mathcal H$ be an admissible $k_0$-tuple. For a prime $p$ and an integer $a$, let $[a;p]$ denote the residue class $a\bmod p$, i.e. the set of integers $\{ x : x \equiv a \pmod p\}$. Call $[a;p]$ occupied if it contains an element of $\mathcal H$.

Suppose that $[a;p]$ and $[b;q]$ are occupied residue classes, for some distinct primes $p$ and $q$, and that $[a';p]$ and $[b';q]$ are unoccupied. Let $\mathcal U$ be the intersection of $\mathcal H$ with $[a;p] \cup [b;q]$, and let $\mathcal V$ be a subset of the integers that lie in the intersection of the interval $\mathcal I$ containing $\mathcal H$ and the set $[a';p] \cup [b';q]$ such that the set $\mathcal H'$ formed by removing the elements of $\mathcal U$ from $\mathcal H$ and adding the elements of $\mathcal V$ is admissible. A necessary (and often sufficient) condition for an integer $v$ to lie in $\mathcal V$ is that $v$ not lie in a residue class $[c;r]$ that is the unique unoccupied residue class modulo $r$ for any prime $r$ other than $p$ or $q$.

The admissible set $\mathcal H'$ lies in the interval $\mathcal I$ containing $\mathcal H$, so its diameter is no greater than that of $\mathcal H$; however, its cardinality may differ. If it happens that $\mathcal H'$ contains more elements than $\mathcal H$, then by eliminating points at either end of $\mathcal H'$ we obtain an admissible $k_0$-tuple that is narrower than $\mathcal H$, and we may "adjust" $\mathcal H$ by replacing it with $\mathcal H'$. The process of adjustment can often be applied repeatedly, yielding a sequence of successively narrower admissible $k_0$-tuples.

## Lower bounds

There is a substantial amount of literature on bounding the quantity $\pi(x+y)-\pi(x)$, the number of primes in a shifted interval $[x+1,x+y]$, where $x,y$ are natural numbers. As a general rule, whenever a bound of the form

$\pi(x+y) - \pi(x) \leq F(y)$ (*)

is established for some function $F(y)$ of $y$, the method of proof also gives a bound of the form

$k_0 \leq F( H(k_0)+1 ).$ (**)

Indeed, if one assumes the prime tuples conjecture, any admissible $k_0$-tuple of diameter $H$ can be translated into an interval of the form $[x+1,x+H+1]$ for some $x$. In the opposite direction, all known bounds of the form (*) proceed by using the fact that for $x \gt y$, the set of primes between $x+1$ and $x+y$ is admissible, so the method of proof of (*) invariably gives (**) as well.

Examples of lower bounds are as follows:

### Brun-Titchmarsh inequality

The Brun-Titchmarsh theorem gives

$\pi(x+y) - \pi(x) \leq (1 + o(1)) \frac{2y}{\log y}$

which then gives the lower bound

$H(k_0) \geq (\frac{1}{2}-o(1)) k_0 \log k_0$.

Montgomery and Vaughan removed the $o(1)$ error term from the Brun-Titchmarsh theorem [MV1973, Corollary 2], giving the more precise inequality

$k_0 \leq 2 \frac{H(k_0)+1}{\log (H(k_0)+1)}.$
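
Since $2(H+1)/\log(H+1)$ is increasing, the Montgomery-Vaughan inequality can be inverted numerically: the least $H$ satisfying it is a lower bound on $H(k_0)$. A small Python sketch (the function name is ours); the outputs agree with the Brun-Titchmarsh row of the benchmark table below:

```python
import math


def bt_lower_bound(k0):
    """Smallest H with k0 <= 2(H+1)/log(H+1); by the Montgomery-Vaughan
    form of Brun-Titchmarsh, H(k0) is at least this large."""
    f = lambda H: 2 * (H + 1) / math.log(H + 1)
    hi = 4
    while f(hi) < k0:        # find an upper end of the search range
        hi *= 2
    lo = hi // 2
    while lo < hi:           # binary search for the least H with f(H) >= k0
        mid = (lo + hi) // 2
        if f(mid) >= k0:
            hi = mid
        else:
            lo = mid + 1
    return lo


print(bt_lower_bound(672), bt_lower_bound(1000))  # -> 2648 4167
```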

### First Montgomery-Vaughan large sieve inequality

The first Montgomery-Vaughan large sieve inequality [MV1973, Theorem 1] gives

$k_0 (\sum_{q \leq Q} \frac{\mu^2(q)}{\phi(q)}) \leq H(k_0)+1 + Q^2$

for any $Q \gt 1$, which is a parameter that one can optimise over (the optimal value is comparable to $H(k_0)^{1/2}$).
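
Rearranged, the inequality gives $H(k_0) \geq k_0 S(Q) - Q^2 - 1$ for every $Q \gt 1$, where $S(Q) = \sum_{q \leq Q} \mu^2(q)/\phi(q)$. A Python sketch that scans integer $Q$ (function name ours); for $k_0 = 672$ and $1{,}000$ it reproduces the First Montgomery-Vaughan entries $2{,}558$ and $3{,}959$ of the benchmark table below:

```python
import math


def first_mv_lower_bound(k0, qmax):
    """H(k0) >= max over integer 1 < Q <= qmax of ceil(k0*S(Q) - Q^2 - 1),
    where S(Q) = sum_{q <= Q} mu(q)^2 / phi(q)."""
    spf = list(range(qmax + 1))              # smallest prime factor sieve
    for p in range(2, int(qmax ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, qmax + 1, p):
                if spf[m] == m:
                    spf[m] = p
    S, best = 1.0, 0                         # S(1) = 1
    for q in range(2, qmax + 1):
        n, phi, squarefree = q, 1, True
        while n > 1:                         # factor q; phi(q) = prod (p-1)
            p = spf[n]
            n //= p
            if n % p == 0:
                squarefree = False
                break
            phi *= p - 1
        if squarefree:
            S += 1.0 / phi
        best = max(best, math.ceil(k0 * S - q * q - 1))
    return best


print(first_mv_lower_bound(672, 50), first_mv_lower_bound(1000, 50))  # -> 2558 3959
```

The optimum here occurs near $Q \approx \sqrt{H(k_0)}$, as noted above, so a modest `qmax` suffices for small $k_0$.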

### Second Montgomery-Vaughan large sieve inequality

The second Montgomery-Vaughan large sieve inequality [MV1973, Corollary 1] gives

$k_0 \leq (\sum_{q \leq z} (H(k_0)+1+cqz)^{-1} \mu(q)^2 \prod_{p|q} \frac{1}{p-1})^{-1}$

for any $z \gt 1$, which is a parameter similar to $Q$ in the previous inequality, and $c$ is an absolute constant. In the original paper of Montgomery and Vaughan, $c$ was taken to be $3/2$; this was then reduced to $\sqrt{22}/\pi$ [B1995, p.162] and then to $3.2/\pi$ [M1978]. It is conjectured that $c$ can in fact be taken to be $1$.
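
Since $\prod_{p|q} \frac{1}{p-1} = \mu^2(q)/\phi(q)$ for squarefree $q$, the inequality can be inverted numerically: for each $z$, the least $H$ with $k_0 \sum_{q \le z} \mu^2(q)/(\phi(q)(H+1+cqz)) \le 1$ is a lower bound on $H(k_0)$, and one maximizes over $z$. A Python sketch (names ours) scanning integer $z$; the published entries presumably optimize $z$ more carefully, so small discrepancies are possible:

```python
import math


def second_mv_lower_bound(k0, c, zmax):
    """Best lower bound on H(k0) from the second Montgomery-Vaughan
    inequality, scanning integer z in [2, zmax]: for each z, the least H
    with k0 * sum_{q<=z} mu(q)^2 / (phi(q)*(H + 1 + c*q*z)) <= 1."""
    spf = list(range(zmax + 1))              # smallest prime factor sieve
    for p in range(2, int(zmax ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, zmax + 1, p):
                if spf[m] == m:
                    spf[m] = p
    w = [0.0] * (zmax + 1)                   # w[q] = mu(q)^2 / phi(q)
    w[1] = 1.0
    for q in range(2, zmax + 1):
        n, phi, squarefree = q, 1, True
        while n > 1:
            p = spf[n]
            n //= p
            if n % p == 0:
                squarefree = False
                break
            phi *= p - 1
        if squarefree:
            w[q] = 1.0 / phi

    def holds(H, z):
        return k0 * sum(w[q] / (H + 1 + c * q * z) for q in range(1, z + 1)) <= 1.0

    best = 0
    for z in range(2, zmax + 1):
        hi = 1
        while not holds(hi, z):              # the sum decreases as H grows
            hi *= 2
        lo = 1
        while lo < hi:                       # least H for which (H, z) works
            mid = (lo + hi) // 2
            if holds(mid, z):
                hi = mid
            else:
                lo = mid + 1
        best = max(best, lo)
    return best


b1 = second_mv_lower_bound(10719, 1.0, 300)  # conjectural c = 1
print(b1)
```

Smaller values of $c$ give larger lower bounds, consistent with the ordering of the benchmark rows below.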

## Benchmarks

Efforts to fill in the blank fields in this table are very welcome.

| $k_0$ | 3,500,000 | 181,000 | 34,429 | 26,024 | 23,283 | 22,949 | 10,719 | 5,000 | 4,000 | 3,000 | 2,000 | 1,000 | 672 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Upper bounds** | | | | | | | | | | | | | |
| First $k_0$ primes past $k_0$ | 59,874,594 | 2,530,338 | 420,878 | 310,134 | 275,082 | 270,698 | 117,714 | 50,840 | 39,660 | 28,972 | 18,386 | 8,424 | 5,406 |
| Zhang sieve | 59,093,364 | 2,486,370 | 411,932 | 303,558 | 268,536 | 264,414 | 114,806 | 49,578 | 38,596 | 28,008 | 17,766 | 8,212 | 5,216 |
| Hensley-Richards sieve | 57,554,086 | 2,422,558 | 402,790 | 297,454 | 262,794 | 258,780 | 112,868 | 48,634 | 38,498 | 27,806 | 17,726 | 8,258 | 5,314 |
| Asymmetric Hensley-Richards | | 2,418,054 | 401,700 | 296,154 | 262,286 | 258,302 | 112,562 | 48,484 | 37,932 | 27,638 | 17,676 | 8,168 | 5,220 |
| Schinzel sieve | | 2,413,228 | 400,512 | 295,162 | 262,206 | 258,000 | 112,440 | 48,726 | 38,168 | 27,632 | 17,616 | 8,160 | 5,196 |
| greedy-Schinzel sieve | | 2,326,476 | 388,076 | 286,308 | 253,968 | 249,992 | 108,694 | 46,968 | 36,756 | 26,754 | 17,054 | 7,854 | 5,030 |
| Best known tuple | 57,554,086 | 2,326,476 | 386,532 | 285,210 | 252,804 | 248,910 | 108,462 | 46,824 | 36,636* | 26,622 | 16,984* | 7,808* | 5,010* |
| Engelsma data | - | - | - | - | - | - | - | - | 36,622 | 26,622 | 16,978 | 7,802 | 4,998 |
| **Predictions** | | | | | | | | | | | | | |
| $k_0 \log k_0 + k_0$ | 56,238,957 | 2,372,232 | 394,096 | 290,604 | 257,405 | 253,381 | 110,119 | 47,586 | 37,176 | 27,019 | 17,202 | 7,907 | 5,046 |
| **Lower bounds** | | | | | | | | | | | | | |
| MV with $c=1$ (conjectural) | | | 234,642 | 172,924 | 153,691 | 151,298 | 66,314 | | | | | | |
| MV with $c=3.2/\pi$ | | | 234,322 | 172,719 | 153,447 | 151,056 | 66,211 | | | | | | |
| MV with $c=\sqrt{22}/\pi$ | | | 227,078 | 167,860 | 148,719 | 146,393 | 63,917 | | | | | | |
| Second Montgomery-Vaughan | | | 226,987 | 167,793 | 148,656 | 146,338 | 63,886 | | | | | | |
| Brun-Titchmarsh | 30,137,225 | 1,272,083 | 211,046 | 155,555 | 137,756 | | 58,863 | 25,351 | 19,785 | 14,358 | 9,118 | 4,167 | 2,648 |
| First Montgomery-Vaughan | | | 196,729 | 145,711 | 128,971 | | 55,149 | 24,012 | 18,768 | 13,696 | 8,448 | 3,959 | 2,558 |

* indicates that the listed width is that of the best tuple found by the methods used for the larger values of $k_0$, but is not as narrow as the narrowest known tuple (due to Engelsma).

The Schinzel tuples were generated with $y=2$ using an optimally chosen interval (the interval is not in every case guaranteed to be optimal, particularly for larger values of $k_0$, but it is believed to be so).

The greedy-Schinzel tuples were generated by breaking ties downward in every case, as in Sutherland's original greedy-greedy algorithm (and the optimal interval was selected on this basis). As noted by Castryck, breaking ties upward may produce better results in some cases. As with the Schinzel tuples, the chosen intervals are not guaranteed to be optimal but are believed to be so.