Line-free sets correlate locally with complexity-1 sets
Revision as of 17:16, 24 February 2009
Warning: I think I can prove something rigorously, but will not be sure until it is completely written up. The writing up will continue when I have the time to do it.
The aim of this page is to present a proof that if [math]\displaystyle{ \mathcal{A} }[/math] is a dense subset of [math]\displaystyle{ [3]^n }[/math] that contains no combinatorial line, then there is a combinatorial subspace X of [math]\displaystyle{ \mathcal{A} }[/math] with dimension tending to infinity and a dense subset [math]\displaystyle{ \mathcal{B} }[/math] of X of complexity 1. It is written in a slightly unconventional way, with first a short sketch, then a longer one that fleshes out a few details, and then a longer one still. That way, even while it is incomplete it should be understandable to some extent, and if I get stuck then it will be clearer where the problem lies.
Short sketch of argument
Throughout this sketch, [math]\displaystyle{ \mathcal{A} }[/math] refers to a subset of [math]\displaystyle{ [3]^n }[/math] of density [math]\displaystyle{ \delta }[/math] in the uniform distribution on [math]\displaystyle{ [3]^n. }[/math] We shall sometimes use letters such as x, y and z for elements of [math]\displaystyle{ [3]^n }[/math] and we shall sometimes write them as triples (U,V,W) of sets that partition [n]. A triple of sets corresponds to the 1-set, the 2-set and the 3-set of a sequence. We shall pass freely between the two ways of thinking about [math]\displaystyle{ [3]^n, }[/math] at each stage using whichever is more convenient.
If (U,V,W) is an element of [math]\displaystyle{ [3]^n }[/math] and (U',V',W') is an arbitrary triple of disjoint sets (not necessarily partitioning [n]), we shall write (U,V,W)++(U',V',W') for the sequence obtained from (U,V,W) by changing everything in U' to 1, everything in V' to 2, and everything in W' to 3. For example, writing § for an unspecified coordinate, we have 331322311++§§§1§22§3=331122213. (We think of (U',V',W') as "overwriting" (U,V,W).) If Z is a subset of [n], we shall also write [math]\displaystyle{ (U,V,W)++[3]^Z }[/math] for the combinatorial subspace consisting of all [math]\displaystyle{ (U,V,W)++(U',V',W') }[/math] with [math]\displaystyle{ (U',V',W')\in[3]^Z. }[/math]
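As a concrete illustration (not part of the original argument), the ++ operation can be sketched in a few lines of Python, with '0' standing in for an unspecified coordinate §:

```python
def overwrite(x, y):
    """(U,V,W)++(U',V',W'): coordinates specified in y overwrite those of x.

    Sequences are strings over {1,2,3}, with '0' marking an unspecified
    coordinate of y (written § in the text).
    """
    return ''.join(b if b != '0' else a for a, b in zip(x, y))

# The worked example from the text: 331322311++§§§1§22§3 = 331122213.
print(overwrite('331322311', '000102203'))  # → 331122213
```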
Step 1. If a, b and c are all within [math]\displaystyle{ C\sqrt n }[/math] of n/3 and a+b+c=n, and if r, s and t are three integers that add up to 0 and are all at most [math]\displaystyle{ m=o(\sqrt{n}) }[/math] in modulus, then the size of the slice [math]\displaystyle{ \Gamma_{a,b,c} }[/math] is 1+o(1) times the size of the slice [math]\displaystyle{ \Gamma_{a+r,b+s,c+t}. }[/math]
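For intuition, Step 1 can be checked numerically (a sketch with illustrative values of n, r, s, t, not part of the proof), using the multinomial formula [math]\displaystyle{ |\Gamma_{a,b,c}|=n!/(a!b!c!) }[/math]:

```python
from math import factorial

def slice_size(a, b, c):
    """|Γ_{a,b,c}|: sequences in [3]^n whose 1-, 2- and 3-sets have sizes a, b, c."""
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

n = 30000
a = b = c = n // 3                 # central slice, well within C*sqrt(n) of n/3
r, s, t = 5, -3, -2                # shifts summing to 0, far smaller than sqrt(n)
ratio = slice_size(a + r, b + s, c + t) / slice_size(a, b, c)
print(ratio)                       # the 1 + o(1) factor: very close to 1
```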
Step 2. If [math]\displaystyle{ \mu }[/math] is some probability distribution on combinatorial subspaces of [math]\displaystyle{ [3]^n }[/math] such that the distribution of a point x chosen uniformly at random from a subspace chosen randomly according to the distribution [math]\displaystyle{ \mu }[/math] is approximately uniform, then we may assume that [math]\displaystyle{ \mu }[/math]-almost all subspaces [math]\displaystyle{ S\subset[3]^n }[/math] contain at least [math]\displaystyle{ (\delta-\eta)|S| }[/math] elements of [math]\displaystyle{ \mathcal{A}. }[/math]
Step 3. By an averaging argument, we find [math]\displaystyle{ (U,V,W) }[/math] and [math]\displaystyle{ Z\subset U\cup V }[/math] with two properties. First, out of all pairs [math]\displaystyle{ (U',V')\in[2]^Z, }[/math] the proportion such that [math]\displaystyle{ (U,V,W)++(U',V',\emptyset) }[/math] belongs to [math]\displaystyle{ \mathcal{A} }[/math] is at least [math]\displaystyle{ \delta/2. }[/math] Secondly, out of all triples [math]\displaystyle{ (U',V',W')\in[3]^Z, }[/math] the proportion such that [math]\displaystyle{ (U,V,W)++(U',V',W') }[/math] belongs to [math]\displaystyle{ \mathcal{A} }[/math] is at least [math]\displaystyle{ \delta-\eta. }[/math] We also choose [math]\displaystyle{ Z }[/math] to have size [math]\displaystyle{ o(\sqrt{n}). }[/math]
Step 4. Fixing such (U,V,W) and Z, let us write (U',V',W') instead of (U,V,W)++(U',V',W'). Then if [math]\displaystyle{ U_1\subset U_2 }[/math] and [math]\displaystyle{ (U_1,Z\setminus U_1,\emptyset) }[/math] and [math]\displaystyle{ (U_2,Z\setminus U_2,\emptyset) }[/math] both belong to [math]\displaystyle{ \mathcal{A}, }[/math] then, writing [math]\displaystyle{ V_i }[/math] for [math]\displaystyle{ Z\setminus U_i, }[/math] we have that [math]\displaystyle{ (U_1,V_2,Z\setminus(U_1\cup V_2)) }[/math] does not belong to [math]\displaystyle{ \mathcal{A}. }[/math]
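To see why, note that the three points in Step 4 lie on a single combinatorial line with wildcard set [math]\displaystyle{ U_2\setminus U_1 }[/math]: setting the wildcards to 1, 2 or 3 yields [math]\displaystyle{ (U_2,V_2,\emptyset), }[/math] [math]\displaystyle{ (U_1,V_1,\emptyset) }[/math] and [math]\displaystyle{ (U_1,V_2,Z\setminus(U_1\cup V_2)) }[/math] respectively, so a line-free set cannot contain all three. A small sketch (with a made-up concrete choice of Z, U_1, U_2) verifies this:

```python
Z = set(range(6))
U1 = {0, 1}
U2 = {0, 1, 2, 3}
V1, V2 = Z - U1, Z - U2
D = U2 - U1                       # the wildcard set U_2 \ U_1

def point(ones, twos, threes):
    """The element of [3]^Z with the given 1-, 2- and 3-sets."""
    return tuple(1 if i in ones else 2 if i in twos else 3 for i in sorted(Z))

def line_point(j):
    """Set every wildcard coordinate of the line to j."""
    return tuple(j if i in D else (1 if i in U1 else 2) for i in sorted(Z))

assert line_point(1) == point(U2, V2, set())
assert line_point(2) == point(U1, V1, set())
assert line_point(3) == point(U1, V2, Z - (U1 | V2))
print("the three points lie on one combinatorial line")
```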
Step 5. Let [math]\displaystyle{ \mathcal{U} }[/math] be the set of all U such that [math]\displaystyle{ (U,Z\setminus U,\emptyset) }[/math] belongs to [math]\displaystyle{ \mathcal{A}, }[/math] and let [math]\displaystyle{ \mathcal{V}=\{Z\setminus U:U\in\mathcal{U}\}. }[/math] Then, in an appropriate sense, the set of all pairs [math]\displaystyle{ (U_1,V_2) }[/math] such that [math]\displaystyle{ U_1\in\mathcal{U} }[/math] and [math]\displaystyle{ V_2\in\mathcal{V} }[/math] is dense. It follows that [math]\displaystyle{ \mathcal{A} }[/math] is disjoint from a dense set of complexity 1.
Step 6. We can partition the set of all disjoint pairs [math]\displaystyle{ (U_1,V_2) }[/math] according to which of the sets [math]\displaystyle{ \mathcal{U}\times\mathcal{V}, }[/math] [math]\displaystyle{ \mathcal{U}\times\mathcal{V}^c, }[/math] [math]\displaystyle{ \mathcal{U}^c\times\mathcal{V} }[/math] or [math]\displaystyle{ \mathcal{U}^c\times\mathcal{V}^c }[/math] they belong to. There must be at least one of the three sets other than [math]\displaystyle{ \mathcal{U}\times\mathcal{V} }[/math] in which [math]\displaystyle{ \mathcal{A} }[/math] has a density increment. Thus, we have a local density increment on a set of complexity 1.
Further details
Step 1
This one is easy. First let us prove the comparable result in [math]\displaystyle{ [2]^n. }[/math] That is, let us prove that if a is within [math]\displaystyle{ O(\sqrt{n}) }[/math] of n/2 and [math]\displaystyle{ r=o(\sqrt{n}), }[/math] then [math]\displaystyle{ \binom na=(1+o(1))\binom n{a+r}. }[/math] This is because the ratio of [math]\displaystyle{ \binom nk }[/math] to [math]\displaystyle{ \binom n{k+1} }[/math] is (k+1)/(n-k), so if [math]\displaystyle{ k=n/2+O(\sqrt{n}), }[/math] then the ratio is [math]\displaystyle{ 1+O(n^{-1/2}). }[/math] If we now multiply [math]\displaystyle{ r=o(\sqrt{n}) }[/math] such ratios together we get [math]\displaystyle{ 1+o(1). }[/math]
To get from there to a comparable statement about the sizes of slices in [math]\displaystyle{ [3]^n, }[/math] note that we can get from [math]\displaystyle{ (a,b,c) }[/math] to [math]\displaystyle{ (a+r,b+s,c+t) }[/math] by two operations where we add [math]\displaystyle{ o(\sqrt n) }[/math] to one coordinate and subtract [math]\displaystyle{ o(\sqrt{n}) }[/math] from another. Each time we do so, we multiply by [math]\displaystyle{ 1+o(1), }[/math] by the result for [math]\displaystyle{ [2]^n }[/math] (but applied to [math]\displaystyle{ [2]^p }[/math] with p close to 2n/3).
Step 2
First let us make the statement more precise. Let us say that a probability distribution [math]\displaystyle{ \nu }[/math] on a finite set X is [math]\displaystyle{ \epsilon }[/math]-uniform if [math]\displaystyle{ \nu(A) }[/math] never differs from [math]\displaystyle{ |A|/|X| }[/math] by more than [math]\displaystyle{ \epsilon. }[/math] (A probabilist would say that the total variation distance between [math]\displaystyle{ \nu }[/math] and the uniform distribution is at most [math]\displaystyle{ \epsilon. }[/math]) Then the precise claim is the following. Let [math]\displaystyle{ \epsilon,\eta\gt 0. }[/math] Suppose that [math]\displaystyle{ \mu }[/math] is a probability distribution on some collection [math]\displaystyle{ \Sigma }[/math] of combinatorial subspaces of [math]\displaystyle{ [3]^n }[/math] such that the distribution [math]\displaystyle{ \nu }[/math] of a point x chosen uniformly at random from a subspace chosen randomly from [math]\displaystyle{ \Sigma }[/math] according to the distribution [math]\displaystyle{ \mu }[/math] is [math]\displaystyle{ \epsilon }[/math]-uniform. Then either we can find a combinatorial subspace [math]\displaystyle{ S\in\Sigma }[/math] such that [math]\displaystyle{ |\mathcal{A}\cap S|/|S|\geq\delta+\epsilon }[/math] or, when you choose S randomly according to the distribution [math]\displaystyle{ \mu, }[/math] the probability that [math]\displaystyle{ |\mathcal{A}\cap S|/|S|\leq\delta-\eta }[/math] is at most [math]\displaystyle{ 2\epsilon/\eta. }[/math]
Proof. Let us first work out a lower bound for the expectation of [math]\displaystyle{ \delta(S):=|\mathcal{A}\cap S|/|S|. }[/math] This expectation is [math]\displaystyle{ \sum_{S\in\Sigma}\mu(S)\delta(S), }[/math] which is precisely the probability that you obtain a point in [math]\displaystyle{ \mathcal{A} }[/math] if you first pick a random S and then pick a random point in S. In other words, it is [math]\displaystyle{ \nu(\mathcal{A}), }[/math] which by hypothesis is within [math]\displaystyle{ \epsilon }[/math] of [math]\displaystyle{ \delta, }[/math] and is therefore at least [math]\displaystyle{ \delta-\epsilon. }[/math] If the probability that [math]\displaystyle{ \delta(S)\lt \delta-\eta }[/math] is p and (since we are not in the first case) [math]\displaystyle{ \delta(S) }[/math] is bounded above by [math]\displaystyle{ \delta+\epsilon, }[/math] then the expectation of [math]\displaystyle{ \delta(S) }[/math] is at most [math]\displaystyle{ p(\delta-\eta)+(1-p)(\delta+\epsilon), }[/math] which equals [math]\displaystyle{ \delta+\epsilon-p(\eta+\epsilon). }[/math] If [math]\displaystyle{ p\gt 2\epsilon/\eta, }[/math] then this is less than [math]\displaystyle{ \delta+\epsilon-2\epsilon=\delta-\epsilon, }[/math] which contradicts the lower bound of [math]\displaystyle{ \delta-\epsilon }[/math] for the expectation. [math]\displaystyle{ \Box }[/math]
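The final inequality can be sanity-checked with illustrative numbers (the values of δ, ε, η below are made up for the check):

```python
delta, eps, eta = 0.3, 0.01, 0.1   # illustrative values with eps much smaller than eta

p = 2 * eps / eta                  # the critical probability p = 2ε/η
upper = p * (delta - eta) + (1 - p) * (delta + eps)

# The algebraic rearrangement used in the proof:
assert abs(upper - (delta + eps - p * (eta + eps))) < 1e-12
# Already at p = 2ε/η the upper bound drops to δ - ε or below, so any
# larger p would contradict the lower bound E[δ(S)] >= δ - ε.
assert upper <= delta - eps + 1e-12
print(upper, delta - eps)
```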
To be continued tomorrow.
Old stuff, probably to be junked
For convenience we shall use equal-slices measure but this is not fundamental to the argument.
The model of equal-slices measure we use is this. If p, q and r are non-negative real numbers with p+q+r=1, and [math]\displaystyle{ (X_1,\dots,X_n) }[/math] are independent random variables with probabilities p, q and r of equalling 1, 2 and 3, respectively, then we define [math]\displaystyle{ \mu_{p,q,r}(\mathcal{A}) }[/math] to be the probability that [math]\displaystyle{ (X_1,\dots,X_n) }[/math] lies in [math]\displaystyle{ \mathcal{A}. }[/math] We then define the density of [math]\displaystyle{ \mathcal{A} }[/math] to be the average of [math]\displaystyle{ \mu_{p,q,r}(\mathcal{A}) }[/math] over all possible triples p,q,r.
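A sampler for this model is straightforward to sketch in Python (an illustration, not from the source; `equal_slices_point` is a made-up name):

```python
import random

def equal_slices_point(n, rng=random):
    """Draw x in [3]^n from the equal-slices model described above."""
    # Sorting two independent uniforms gives (p, q, r) uniform on the
    # simplex p + q + r = 1.
    u, v = sorted((rng.random(), rng.random()))
    p, q, r = u, v - u, 1 - v
    # Coordinates are then independent with P(1) = p, P(2) = q, P(3) = r.
    return [rng.choices((1, 2, 3), weights=(p, q, r))[0] for _ in range(n)]

x = equal_slices_point(12)
print(x)   # a random element of [3]^12
```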
Now let us do some averaging. Let us write [math]\displaystyle{ \delta_{p,q,r} }[/math] for [math]\displaystyle{ \mu_{p,q,r}(\mathcal{A}). }[/math] Let us also use the notation (U,V,W) for the [math]\displaystyle{ x\in[3]^n }[/math] that has 1-set U, 2-set V and 3-set W.
First, we prove two similar lemmas that are very simple, but also rather useful.
Lemma 1. The probability distribution of (U,V,W) conditioned on W is the equal-slices measure of (U,V) with ground set [math]\displaystyle{ [n]\setminus W. }[/math]
Proof. We are asking for the distribution of the random variable [math]\displaystyle{ (X_1,\dots,X_n) }[/math] when we condition on the event that [math]\displaystyle{ X_i=3 }[/math] for every [math]\displaystyle{ i\in W. }[/math] Let us condition further on the value of r. Then for each fixed p, q such that p+q=1-r, and each [math]\displaystyle{ i\notin W, }[/math] we have that [math]\displaystyle{ X_i=1 }[/math] with probability p/(1-r) and [math]\displaystyle{ X_i=2 }[/math] with probability q/(1-r). When we average over p and q, the numbers p/(1-r) and q/(1-r) are uniformly distributed over pairs of positive reals that add up to 1. For each r, we therefore obtain precisely the equal-slices probability distribution on the random variables [math]\displaystyle{ X_i }[/math] with [math]\displaystyle{ i\notin W, }[/math] so the same is true when we average over r.[math]\displaystyle{ \Box }[/math]
It is obviously not the case that the set W in a random triple (U,V,W) is distributed according to equal-slices measure: rather, we choose r with density 2(1-r) and then choose elements of W independently with probability r. When we refer to a random set W or discuss probabilities of events associated with W, it will be this measure that we refer to. (In other words, we take the marginal distribution on W, just as we should.)
To be continued, but possibly not for a while as I have a lot to do in the near future.