Talk:Outline of first paper


Discussion from the blog (reverse chronological order)

(O'Donnell) I have collected a small amount of free time and was planning to put some of it into the write-up. I agree that it would be nice to fix the overall outline before starting. Here are some more comments:

. Since I will likely never finish it, I put a link to my old outline of the proof on the wiki’s home page. I kind of like that outline too :) although of course it’s quite similar to Tim’s.

. Changed “author” from “Polymath” to “A. Polymath” (as Terry had suggested a while ago). Reasons: a) Automatic indexing systems probably like authors to have both a “first name” and a “last name”. This will help the author list to get sorted under “P”. b) The next projects could be authored by “B. Polymath”, “C. Polymath”, “ZZ. Polymath”, etc. This would also help get all projects sorted under “P”, yet differentiate the author groups. c) Subtle allusion to A. Nilli. :)

. Address: perhaps we could change it to the URL of the wiki. (Perhaps we should get a more stable one?) I suppose we’ll need a corresponding address for the paper-journal version, though…

. I agree with Terry’s votes for [k] = {1, …, k} and A rather than \mathcal{A}.

. I’ll admit it: I always find the term “Grassmannian” scary. If it were me, I would just try to refer to “d-dimensional subspaces”. Not sure we need a name for the set :)

. Naming the distributions: I think the phrase “equal slices” is unbelievably catchy, and this is cute. On the other hand, it’s hard to deny the officialness of “random ordered 3-partition”. I mean, such terms are in Chapter 1 of Stanley :) Either becomes a bit of a mouthful when you get to its sister distribution: “equal nondegenerate slices” or “ordered weak 3-partition” (“weak” = “parts of size 0 allowed”, apparently). So I don’t know…
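
For reference, and hedging that what follows is just the standard definition rather than anything specific to this outline: the equal-slices distribution on [k]^n first picks a slice (a_1, \dots, a_k) with a_1 + \dots + a_k = n uniformly at random (there are \binom{n+k-1}{k-1} of these if parts of size 0 are allowed), and then picks x uniformly among the \binom{n}{a_1, \dots, a_k} strings with exactly a_j coordinates equal to j. The two sister distributions above should differ only in whether slices with some a_j = 0 are permitted.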

. For the letter, who knows? I’m kind of against \mu as it sounds like the uniform distribution to me. \nu is a nice letter for measures. Or maybe \lambda (for “line”) or \pi (for “partition”)…

. For the last question, I know Varnavides’s name gets attached to this notion, but I must say it is the opposite of evocative for a nonexpert like me. It actually took me a good few hundred blog posts before I understood that “Varnavides = contains random lines w. positive prob.” Guided by Tim’s spirit of trying to make the paper as easy-to-read as possible, how about…

“DHJ(3) = Nonnegligible sets in [3]^n contain a line.” Then: “Nonnegligible sets in [3]^n contain a high-dimensional subspace.” “Nonnegligible sets in [3]^n contain a nonnegligible fraction of lines.” “Nonnegligible sets in [3]^n contain a nonnegligible fraction of high-dimensional subspaces.”

Wordy, perhaps, but I think it gets the idea across straight away. Also, using the same word “nonnegligible” subtly gets across the key fact that we’re actually talking about the same distribution.
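
Spelled out with the usual quantifiers, and staying deliberately vague about which measure “nonnegligible” refers to: the first statement would read “for every \delta > 0 there is an n_0(\delta) such that, for n \geq n_0, every A \subseteq [3]^n of density at least \delta contains a combinatorial line”, and the third would read “… every such A contains at least a c(\delta) > 0 fraction of the combinatorial lines”, i.e. a random line lies entirely inside A with probability at least c(\delta), where the density of A and the distribution on lines are taken with respect to the same underlying measure.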

Or does this just sound too weird to an arithmetic combinatorialist’s ears?


(Tao) I guess, on balance, that [k]={1,…,k} looks slightly nicer than [k]={0,…,k-1}; the 0-based notation is slightly more “logical”, but we don’t seem to derive any substantial benefit from it. So I’m weakly in favour of {1,…,k}.

We can borrow from geometry and use the notation Gr( [k]^n, d ) to denote the d-dimensional subspaces of [k]^n (a “combinatorial Grassmannian”). I don’t know what to call the measures on this space though. Does every measure \mu on [k]^n canonically define a measure on Gr( [k]^n, d )? It seems to me that one needs some additional parameters to specify such a measure.
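
For concreteness, using what I take to be the usual description: an element of Gr( [k]^n, d ) can be written as a template string in ([k] \cup \{x_1, \dots, x_d\})^n in which every wildcard x_i appears at least once, the corresponding subspace being the k^d strings obtained by substituting letters of [k] for the wildcards. One natural-looking construction - not necessarily the one the outline intends - is to put a measure on the templates themselves (say, equal-slices on the alphabet [k] \cup \{x_1, \dots, x_d\}, conditioned on every wildcard appearing) and push it forward to Gr( [k]^n, d ); the extra parameters would then amount to the choice of that template measure.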

“A” versus “{\mathcal A}” - I would prefer A, as I want to think of a subset of [k]^n as a set of points, rather than a collection of sets (which is what the {\mathcal A} notation suggests to me). The one problem with using A is that we are also likely to be using A for subsets of [n]^2 in the corners theorem, and if we are going to discuss the reduction of corners to DHJ then there might be a very slight notational clash there. But perhaps this is actually a good thing, since we want to use the corners argument to motivate the DHJ one…

As for the last question, what about “Hales-Jewett property” for containing lines, “subspace Hales-Jewett property” for containing subspaces, and “subspace Hales-Jewett-Varnavides property” for containing subspaces with positive probability (thus, e.g., “dense ij-insensitive sets obey the subspace Hales-Jewett-Varnavides property”)?


(O'Donnell) One question: it looks like you’ve divided the proof into three main lemmas: multidim-Sperner (more generally, multidim-DHJ(k-1)), a line-free set correlating with intersections of ij-insensitive sets, and ij-insensitive sets being partitionable.

It seems to me that the Varnavides version of multidim-Sperner (more generally, of multidim-DHJ(k-1)) may as well be considered the basic lemma. Where will this go?
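
To pin down what that lemma would say (hedging once more on the exact distribution): for every \delta > 0 and every d there should be a c(\delta, d) > 0 such that, for n sufficiently large, any A \subseteq [2]^n of density at least \delta is such that a randomly drawn d-dimensional combinatorial subspace lies entirely inside A with probability at least c(\delta, d), with the density of A and the distribution on subspaces again taken with respect to matching measures.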

Putting it into \subsection{The multidimensional Sperner theorem} makes sense to me, although then the actual \section{A proof of the theorem for $k=3$.} might be quite short. On the other hand, if it goes into the proof section itself, then the multidim-Sperner theorem subsection will be awfully short (might as well just quote Gunderson-Rodl-Sidorenko).

The latter seems less modular to me, so I guess what I’m ultimately suggesting is that \subsection{The multidimensional Sperner theorem} be more like \subsection{The Varnavides multidimensional Sperner theorem}.

Except that I strongly vote for using a more generic descriptor than “Varnavides”. There’s got to be a catchy word that indicates to the reader that not only do dense sets contain lines, but that a random line is in there with positive probability.

Also, I’m still of two minds as to whether “Equal-Slices” should be treated as the main distribution, with Polya as a slight variant, or vice versa.