Line-free sets correlate locally with complexity-1 sets
Warning: I think I can prove something rigorously, but will not be sure until it is completely written up. The writing up will continue when I have the time to do it.
The aim of this page is to present a proof that if [math]\displaystyle{ \mathcal{A} }[/math] is a dense subset of [math]\displaystyle{ [3]^n }[/math] that contains no combinatorial line, then there is a combinatorial subspace X of [math]\displaystyle{ [3]^n }[/math] with dimension tending to infinity and a dense subset [math]\displaystyle{ \mathcal{B} }[/math] of X of complexity 1 with which [math]\displaystyle{ \mathcal{A} }[/math] correlates. For convenience we shall use equal-slices measure, but this is not fundamental to the argument.
The model of equal-slices measure we use is this. If p, q and r are non-negative real numbers with p+q+r=1, and [math]\displaystyle{ (X_1,\dots,X_n) }[/math] are independent random variables taking the values 1, 2 and 3 with probabilities p, q and r, respectively, then we define [math]\displaystyle{ \mu_{p,q,r}(\mathcal{A}) }[/math] to be the probability that [math]\displaystyle{ (X_1,\dots,X_n) }[/math] lies in [math]\displaystyle{ \mathcal{A}. }[/math] We then define the density of [math]\displaystyle{ \mathcal{A} }[/math] to be the average of [math]\displaystyle{ \mu_{p,q,r}(\mathcal{A}) }[/math] over all triples (p,q,r), chosen uniformly from the simplex of non-negative triples summing to 1.
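In symbols, writing [math]\displaystyle{ \bar\mu(\mathcal{A}) }[/math] for this density (a notation used only in this formula), the averaging works out as

[math]\displaystyle{ \bar\mu(\mathcal{A}) = 2\int_0^1\int_0^{1-r}\mu_{p,\,1-p-r,\,r}(\mathcal{A})\,dp\,dr, }[/math]

the factor 2 being the reciprocal of the area of the triangle [math]\displaystyle{ \{(p,r):p,r\ge 0,\ p+r\le 1\}. }[/math]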
Now let us do some averaging. Let us write [math]\displaystyle{ \delta_{p,q,r} }[/math] for [math]\displaystyle{ \mu_{p,q,r}(\mathcal{A}). }[/math] Let us also use the notation (U,V,W) for the [math]\displaystyle{ x\in[3]^n }[/math] that has 1-set U, 2-set V and 3-set W.
First, we prove two similar lemmas that are very simple, but also rather useful.
Lemma 1. Conditioned on the 3-set W, the distribution of (U,V) is the equal-slices measure with ground set [math]\displaystyle{ [n]\setminus W. }[/math]
Proof. We are asking for the distribution of the random variable [math]\displaystyle{ (X_1,\dots,X_n) }[/math] when we condition on the event that [math]\displaystyle{ X_i=3 }[/math] if and only if [math]\displaystyle{ i\in W. }[/math] Let us condition further on the value of r. Note that the probability of this event, given (p,q,r), is [math]\displaystyle{ r^{|W|}(1-r)^{n-|W|}, }[/math] which depends only on r, so conditioning on the event leaves (p,q) uniformly distributed over the pairs with p+q=1-r. Then for each fixed p, q such that p+q=1-r, and each [math]\displaystyle{ i\notin W, }[/math] we have that [math]\displaystyle{ X_i=1 }[/math] with probability p/(1-r) and [math]\displaystyle{ X_i=2 }[/math] with probability q/(1-r). When we average over p and q, the numbers p/(1-r) and q/(1-r) are uniformly distributed over pairs of non-negative reals that add up to 1. For each r, we therefore obtain precisely the equal-slices probability distribution on the random variables [math]\displaystyle{ X_i }[/math] with [math]\displaystyle{ i\notin W, }[/math] so the same is true when we average over r.[math]\displaystyle{ \Box }[/math]
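As a quick numerical sanity check of Lemma 1 (not part of the argument), here is a minimal Python sketch; the sampler, the value n=6 and the set W={0,1} are ad hoc choices for illustration only. It compares the conditional distribution of |U| given that the 3-set is exactly W with the distribution of |U| under equal-slices measure on the ground set [n]\W.

<syntaxhighlight lang="python">
import random
from collections import Counter

def equal_slices_sample(n, alphabet):
    # Pick a probability vector uniformly at random from the simplex
    # (sorted uniform cuts), then draw the n coordinates independently from it.
    cuts = sorted(random.random() for _ in range(len(alphabet) - 1))
    weights = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    return random.choices(alphabet, weights=weights, k=n)

n = 6
W = {0, 1}              # an arbitrary fixed 3-set for the check
rest = n - len(W)
trials = 500_000

# Distribution of |U| conditioned on the 3-set being exactly W
# (rejection sampling: wasteful, but simple).
conditioned, kept = Counter(), 0
for _ in range(trials):
    x = equal_slices_sample(n, [1, 2, 3])
    if {i for i, v in enumerate(x) if v == 3} == W:
        conditioned[x.count(1)] += 1
        kept += 1

# Distribution of |U| under equal-slices measure on the ground set
# [n] \ W, i.e. direct sampling from {1,2}^rest.
direct = Counter()
for _ in range(trials):
    direct[equal_slices_sample(rest, [1, 2]).count(1)] += 1

for k in range(rest + 1):
    print(k, conditioned[k] / max(kept, 1), direct[k] / trials)
</syntaxhighlight>

Both printed columns should be close to the uniform distribution 1/(rest+1), since for a two-letter alphabet equal-slices measure gives each possible value of |U| the same weight.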
It is obviously not the case that the set W in a random triple (U,V,W) is distributed according to equal-slices measure: rather, we choose r with density 2(1-r) and then put each element of [n] into W independently with probability r. When we refer to a random set W, or discuss probabilities of events associated with W, this is the measure we mean. (In other words, we take the marginal distribution of W, just as we should.)
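(To see where the density 2(1-r) comes from: when (p,q,r) is chosen uniformly from the simplex, the pairs (p,q) compatible with a given value of r form a segment of length proportional to 1-r, so the marginal density of r is proportional to 1-r, and [math]\displaystyle{ \int_0^1 2(1-r)\,dr=1 }[/math] fixes the constant at 2.)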
To be continued, but possibly not for a while as I have a lot to do in the near future.