# Distribution of primes in smooth moduli

### From Polymath1Wiki


## Revision as of 17:20, 20 July 2013

A key input to Zhang's proof that bounded gaps between primes occur infinitely often is a distribution result on primes in smooth moduli, which we have called MPZ[ϖ,δ] (and later strengthened to MPZ'[ϖ,δ]). These estimates are obtained as a combination of three other estimates, which we will call Type I, Type II, and Type III.


## Definitions

### Asymptotic notation

*x* is a parameter going off to infinity, and all quantities may depend on *x* unless explicitly declared to be "fixed". The asymptotic notation is then defined relative to this parameter. A quantity *q* is said to be *of polynomial size* if one has *q* = *O*(*x*^{O(1)}), and *bounded* if *q* = *O*(1). We also write for , and for .

### Coefficient sequences

We need a fixed quantity *A*_{0} > 0.

A **coefficient sequence** is a finitely supported sequence that obeys the bounds

- If α is a coefficient sequence and is a primitive residue class, the (signed) *discrepancy* of α in the sequence is defined to be the quantity

- A coefficient sequence α is said to be *at scale N* for some if it is supported on an interval of the form .

- A coefficient sequence α at scale *N* is said to *obey the Siegel-Walfisz theorem* if one has

for any , any fixed *A*, and any primitive residue class .

- A coefficient sequence α at scale *N* is said to be *smooth* if it takes the form α(*n*) = ψ(*n*/*N*) for some smooth function supported on obeying the derivative bounds

for all fixed (note that the implied constant in the *O*() notation may depend on *j*).

### Congruence class systems

Let , and let <math>{\mathcal S}_I</math> denote the square-free numbers whose prime factors lie in *I*.

- A *singleton congruence class system* on *I* is a collection of primitive residue classes for each <math>q \in {\mathcal S}_I</math>, obeying the Chinese remainder theorem property

whenever are coprime. We say that such a system has *controlled multiplicity* if the quantity

obeys the estimate

for any fixed *C* > 1 and any congruence class with . Here τ is the divisor function. [Actually, in the most recent proofs of Type I, II, and III estimates, the controlled multiplicity hypothesis is no longer needed, and so this definition is no longer relevant for the project.]
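The Chinese remainder theorem property can be illustrated with a toy computation (the primes and residues below are hypothetical choices for illustration, not data from the project): fixing a residue class a_p mod p for each prime p, the class a_q mod q for squarefree q is glued together by the CRT, and automatically satisfies the required compatibility whenever moduli are coprime.

```python
def crt(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2) for coprime m1, m2."""
    inv = pow(m1, -1, m2)  # modular inverse (Python 3.8+)
    return (r1 + m1 * (((r2 - r1) * inv) % m2)) % (m1 * m2)

# Hypothetical residues a_p mod p for primes p in a small interval I:
a = {3: 1, 5: 2, 7: 3}

def residue(prime_factors):
    """Residue a_q mod q for squarefree q = product of the given primes,
    glued together by CRT so that a_q = a_p (mod p) for each p dividing q."""
    r, m = 0, 1
    for p in prime_factors:
        r, m = crt(r, m, a[p], p), m * p
    return r
```

For example, `residue([3, 5])` returns a class mod 15 that reduces to a_3 mod 3 and a_5 mod 5, which is exactly the compatibility demanded of a singleton congruence class system.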

### Smooth and densely divisible numbers

A natural number *n* is said to be *y-smooth* if all of its prime factors are less than or equal to *y*. We say that *n* is *y-densely divisible* if, for every 1 ≤ *R* ≤ *n*, one can find a factor of *n* in the interval [*y*^{ − 1}*R*, *R*]. Note that *y*-smooth numbers are automatically *y*-densely divisible, but the converse is not true in general. We say that *n* is *doubly y-densely divisible* if, for every 1 ≤ *R* ≤ *n*, one can find a factor of *n* in the interval [*y*^{ − 1}*R*, *R*] which is itself *y*-densely divisible.

We let denote the space of *y*-densely divisible numbers, and the space of doubly densely divisible numbers, thus

- .
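These definitions can be made concrete with a short Python check (the helper names are ours, for illustration only): an integer *n* is *y*-densely divisible exactly when consecutive divisors of *n* never jump by more than a factor of *y*, since a gap (d, d') with d' > y·d would leave any R just below d' without a divisor in [R/y, R].

```python
from math import isqrt

def divisors(n):
    """All positive divisors of n, in increasing order."""
    small, large = [], []
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
    return small + large[::-1]

def densely_divisible(n, y):
    """n is y-densely divisible iff every R in [1, n] admits a divisor of n
    in [R/y, R]; equivalently, consecutive divisors d < d' satisfy d' <= y*d."""
    ds = divisors(n)
    return all(b <= y * a for a, b in zip(ds, ds[1:]))
```

For instance, 56 = 2^3 · 7 is 2-densely divisible but not 2-smooth, confirming that the converse implication fails, while a prime such as 7 is not 2-densely divisible (its only divisors 1 and 7 leave a gap larger than a factor of 2).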

### MPZ and variants

Let and be fixed. Let Λ denote the von Mangoldt function.

- We say that the estimate holds if one has the estimate

for any fixed *A* > 0, any , and any congruence class system .

- We say that the estimate holds if one has the estimate

for any fixed *A* > 0, any , and any congruence class system .

- We say that the estimate holds if one has the estimate

for any fixed *A* > 0, any , and any congruence class system .

In early arguments an additional "controlled multiplicity" hypothesis was added to these assertions, but this hypothesis is no longer necessary.

### Type I, Type II, and Type III

Let , , and 0 < σ < 1 / 2 be fixed.

- We say that holds if, whenever
*M*,*N*are quantities with

and

or equivalently

for some fixed *c* > 0, and α,β are coefficient sequences at scale *M*,*N* respectively with β obeying a Siegel-Walfisz theorem, , and is a congruence class system, then one has

for all fixed *A* > 0.

- We say that holds if, whenever
*M*,*N*are quantities with

and

or equivalently

for some sufficiently small fixed *c* > 0, and α,β are coefficient sequences at scale *M*,*N* respectively with β obeying a Siegel-Walfisz theorem, , and is a congruence class system, then one has

for all fixed *A* > 0.

- We say that holds if, whenever
*M*,*N*_{1},*N*_{2},*N*_{3}are quantities with

α,ψ_{1},ψ_{2},ψ_{3} are coefficient sequences at scale *M*,*N*_{1},*N*_{2},*N*_{3} respectively with ψ_{1},ψ_{2},ψ_{3} smooth, , and is a congruence class system, then one has

for all fixed *A* > 0.

- We define , , analogously to , , but with the hypothesis replaced with , and replaced with . These estimates are slightly stronger than their unprimed counterparts.

- There is also a "double-primed" variant of these estimates, intermediate in strength between the primed and unprimed estimates, in which the dense divisibility hypothesis is replaced with a "double dense divisibility" hypothesis.

## The combinatorial lemma

**Combinatorial lemma** Let , , and 1 / 10 < σ < 1 / 2 be fixed.

- If , , and all hold, then holds.
- Similarly, if , , and all hold, then holds.
- Similarly, if , , and all hold, then holds.

This lemma is (somewhat implicitly) proven here. It reduces the verification of and to a comparison of the best available Type I, Type II, and Type III estimates, as well as the constraint σ > 1 / 10.

## Type I estimates

In all of the estimates below, , , and σ > 0 are fixed.

### Level 1

**Type I-1** We have (and hence ) whenever

- .

This result is implicitly proven here. (There, only is proven, but the method extends without difficulty to .) It uses the method of Zhang, and is ultimately based on exponential sums for incomplete Kloosterman sums on smooth moduli obtained via completion of sums.

### Level 2

**Type I-2** We have (and hence ) whenever

and

- .

This estimate is implicitly proven here. It improves upon the Level 1 estimate by using the q-van der Corput A-process in the *d*_{2} direction. The final constraint was removed in this comment.

### Level 3

**Type I-3** We have (and hence ) whenever

- .

This estimate is established here (it was previously tentatively established in this comment with an additional condition , which can now be dropped, thanks to an improved control on a secondary error term in the exponential sum estimates). It improves upon the Level 2 estimate by taking advantage of dense divisibility to optimise the direction of averaging.

### Level 4

By iterating the q-van der Corput A-process, it appears that one can obtain assuming a constraint of the form

but this is inferior to the Level 3 estimates in practice. Details can be found here.

### Level 6

Even further improvement in the Type I sums may be possible by rebalancing the final Cauchy-Schwarz: instead of performing Cauchy-Schwarz in *n*,*q*_{1} (leaving *h*,*q*_{2} to be doubled), one can factor *q*_{2} = *r*_{2}*s*_{2} and perform Cauchy-Schwarz in *n*,*q*_{1},*r*_{2}, doubling only *h*,*s*_{2}. The idea is to make the diagonal case *h**s*'_{2} = *h*'*s*_{2} do more of the work and the off-diagonal case do less of the work. This idea was first raised here. Preliminary computations suggest that this allows one to take for the Type I sums in the *x*^{δ}-smooth case. In this post it is shown that the same bound holds in the densely divisible case, thus holds whenever

- .

### Level 5

(The numbering here is out of order because the Level 5 estimates proved harder to implement than the Level 6 estimates.)

Further improvement may be obtainable by taking advantage of averaging in auxiliary parameters; in particular, averaging over the parameter *d*_{1} has provisionally (subject to verification of some Deligne-level estimates) been shown to establish whenever

- ;

see this comment.

## Type II estimates

In all of the estimates below, and are fixed.

### Level 1

**Type II-1** We have (and hence ) whenever

- .

This estimate is implicitly proven here. (There, only is proven, but the method extends without difficulty to .) It uses the method of Zhang, and is ultimately based on exponential sums for incomplete Kloosterman sums on smooth moduli obtained via completion of sums.

### Level 1a

**Type II-1a** We have (and hence ) whenever

- .

This estimate is implicitly proven here. It is a slight refinement of the Level 1 estimate based on a more careful inspection of the error terms in the completion of sums method.

### Level 1b

**Type II-1b** We have (and hence ) whenever

- .

This refinement of the Level 1a estimate came from realising that in the Type II case, the R parameter can be selected to lie in the range rather than . See this comment for details.

### Level 1c

**Type II-1c** We have (and hence and ) whenever

- .

This further refinement of the Level 1b estimate came from realising that R can in fact range in if one strengthens the controlled multiplicity hypothesis slightly; see this comment or this post for details.

### Level 2

In analogy with the Type I-2 estimates, one could hope to improve the Type II estimates by using the q-van der Corput process in the *d*_{2} direction. Interestingly, however, it appears that the Type II numerology lies outside of the range in which the van der Corput process is beneficial (at least if one only applies it once), so the Level 2 estimate looks to be inferior to the Level 1b estimate.

### Level 3

In analogy with the Type I-3 estimates, one should be able to improve the Type II estimates by using the q-van der Corput process in an optimised direction. As with Level 2 estimates though, it appears that Level 3 estimates are inferior to the Level 1b estimate.

### Level 4

In analogy with the Type I-4 estimates, one should be able to improve the Type II estimates by iterating the q-van der Corput A-process.

### Level 5

In analogy with the Type I-5 estimates, one should be able to improve the Type II estimates by taking advantage of averaging in the h parameters.

### Level 6

Even further improvement in the Type II sums may be possible by rebalancing the final Cauchy-Schwarz: instead of performing Cauchy-Schwarz in *n* (leaving *h*,*q*_{1},*q*_{2} to be doubled), factor *q*_{1} = *r*_{1}*s*_{1} and Cauchy-Schwarz in *n*,*r*_{1} and only double *h*,*s*_{1},*q*_{2}. The idea is to make the diagonal case *h**s*'_{1}*q*'_{2} = *h*'*s*_{1}*q*_{2} do more of the work and the off-diagonal case do less of the work. This idea was first raised here.

## Type III estimates

In all of the estimates below, , , and σ > 0 are fixed.

### Level 1

**Type III-1** We have (and hence ) whenever

This estimate is implicitly proven here. (There, only is proven, but the method extends without difficulty to .) It uses the method of Zhang, using Weyl differencing and not exploiting the averaging in the α or *q* parameters. The constraint can also be written as a lower bound on σ:

- .

### Level 2

**Type III-2** We have (and hence ) whenever

This estimate is implicitly proven here. It is a refinement of the Level 1 estimate that takes advantage of the α averaging. The constraint may also be written as a lower bound on σ:

- .

### Level 3

**Type III-3** We have (and hence ) whenever

- .

This estimate is proven in this comment. It uses the newer method of Fouvry, Kowalski, Michel, and Nelson that avoids Weyl differencing. The constraint may also be written as a lower bound on σ:

- .

### Level 4

**Type III-4** We have (and hence and ) whenever

- .

This estimate is proven in this comment and then in this post. It modifies the Level 3 argument by exploiting averaging in the α parameter (this was suggested already by Fouvry, Kowalski, Michel, and Nelson). The constraint may also be written as a lower bound on σ:

- .
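For concreteness, the rearrangement behind this lower bound can be written out explicitly, starting from the form of the Level 4 constraint recorded as `typeIII_4` in the Maple code at the end of this page:

```latex
\frac{1}{4} + \frac{3}{4}\cdot\frac{3}{2}\Big(\frac{1}{2}+\sigma\Big)
  > \frac{7}{4}\Big(\frac{1}{2}+2\varpi\Big) + \frac{1}{4}\delta
\iff \frac{13}{16} + \frac{9}{8}\sigma > \frac{7}{8} + \frac{7}{2}\varpi + \frac{1}{4}\delta
\iff \frac{9}{8}\sigma > \frac{1}{16} + \frac{7}{2}\varpi + \frac{1}{4}\delta
\iff \sigma > \frac{1}{18} + \frac{28}{9}\varpi + \frac{2}{9}\delta.
```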

### Level 5

One may also hope to improve upon Level 4 estimates by exploiting Ramanujan sum cancellation (as Zhang did in his Level 1 argument).

### Level 6

An alternative way to improve upon Level 4 estimates would be to use the q-van der Corput process to bound incomplete Kloosterman correlations.

## Combinations

By combining a Type I estimate, a Type II estimate, and a Type III estimate together one can get estimates of the form or for small enough by using the combinatorial lemma. Here are the combinations that have arisen so far in the Polymath8 project:

| Type I | Type II | Type III | Result | Details | Where optimum is obtained |
|---|---|---|---|---|---|
| Level 1 | Level 1 | Level 1 | | details | Type I / Type III border |
| Level 1 | Level 1 | Level 2 | | details | Type I / Type III border |
| Level 2 | Level 1a | Level 1 | | details | Type I / Type III border |
| Level 2 | Level 1a | Level 2 | | details | Type I / Type III border |
| Level 3? | Level 1a | Level 2 | ? | details | Type I / Type III border |
| Level 2 | Level 1a | Level 3 | | details, refinement | Type I / Type III border |
| Level 3 | Level 1a | Level 3 | | details | Type I / Type III border |
| Level 3 | Level 1c | Level 4 | | details | Type I / combinatorial border |
| Level 6 | Level 1c | Level 4 | | [details](http://terrytao.wordpress.com/2013/06/30/bounded-gaps-between-primes-polymath8-a-progress-report/#comment-237087) | Type I / combinatorial border |
| Level 5 | Level 1c | Level 4 | (600/7)ϖ + (180/7)δ < 1 | [details](http://terrytao.wordpress.com/2013/07/07/the-distribution-of-primes-in-doubly-densely-divisible-moduli/#comment-239189) | Type I / combinatorial border |

For simplicity, only the constraint that is relevant for near-maximal values of ϖ is shown.

Here is some Maple code for finding the constraints coming from a certain set of inequalities (e.g. Type I level 5, Type II level 1c, and Type III level 4). To reduce the complexity of the output, one can introduce an artificial cutoff of, say, ϖ > 1/200, in the base constraints to restrict attention to the regime of large values of ϖ.

    with(SolveTools[Inequality]);
    base := [ sigma > 1/10, sigma < 1/2, varpi > 0, varpi < 1/4, delta > 0, delta < 1/4+varpi ];
    typeI_1 := [ 11 * varpi + 3 * delta + 2 * sigma < 1/4 ];
    typeI_2 := [ 17 * varpi + 4 * delta + sigma < 1/4, 20 * varpi + 6 * delta + 3 * sigma < 1/2, 32 * varpi + 9 * delta + sigma < 1/2 ];
    typeI_3 := [ 54 * varpi + 15 * delta + 5 * sigma < 1 ];
    typeI_4 := [ 236 * varpi/3 + 64 * delta/3 + 4*sigma < 1];
    typeI_6 := [ 56 * varpi + 16*delta + 4*sigma < 1];
    typeI_5 := [ 160*varpi/3 + 16*delta + 34*sigma/9 < 1];
    typeII_1 := [ 58 * varpi + 10 * delta < 1/2 ];
    typeII_1a := [48 * varpi + 7 * delta < 1/2 ];
    typeII_1b := [38 * varpi + 7 * delta < 1/2 ];
    typeII_1c := [34 * varpi + 7 * delta < 1/2 ];
    typeIII_1 := [ (13/2) * (1/2 + sigma) > 8 * (1/2 + 2*varpi) + delta ];
    typeIII_2 := [ 1 + 5 * (1/2 + sigma) > 8 * (1/2 + 2*varpi) + delta ];
    typeIII_3 := [ 3/2 * (1/2 + sigma) > (7/4) * (1/2 + 2*varpi) + (3/8) * delta ];
    typeIII_4 := [ 1/4 + (3/4) * (3/2) * (1/2 + sigma) > (7/4) * (1/2 + 2*varpi) + (1/4) * delta ];
    constraints := [ op(base), op(typeI_5), op(typeII_1c), op(typeIII_4) ];
    LinearMultivariateSystem(constraints, [varpi,delta,sigma]);
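As an independent cross-check (a Python sketch, not part of the project's Maple workflow): at the Type I / combinatorial border, the combinatorial lemma forces σ > 1/10, and substituting the threshold σ = 1/10 into the Type I level 5 inequality and renormalising recovers the combined constraint (600/7)ϖ + (180/7)δ < 1, using exact rational arithmetic.

```python
from fractions import Fraction as F

# Type I level 5: (160/3)*varpi + 16*delta + (34/9)*sigma < 1
a_varpi, a_delta, a_sigma = F(160, 3), F(16), F(34, 9)

# The combinatorial lemma requires sigma > 1/10; substitute the border value
# sigma = 1/10, leaving the remaining room for varpi and delta:
room = 1 - a_sigma * F(1, 10)     # 1 - 17/45 = 28/45

# Renormalise so the right-hand side equals 1:
c_varpi = a_varpi / room          # coefficient of varpi
c_delta = a_delta / room          # coefficient of delta

print(c_varpi, c_delta)           # 600/7 180/7
print(1 / c_varpi)                # maximal varpi as delta -> 0: 7/600
```

Letting δ → 0 then gives ϖ < 7/600, matching the near-maximal regime of the last row of the table.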