# Ergodic-inspired methods

These methods are inspired by the Furstenberg-Katznelson argument and the ergodic perspective.

## Idea #1: extreme localisation

Let $A \subset [3]^n$ be line-free with density $\delta$. Let $m = m(\delta)$ be a medium-sized integer independent of $n$. We embed $[3]^m$ inside $[3]^n$ to create a random set $A_m \subset [3]^m$ which enjoys stationarity properties. We then look at the events $E_{i,j}$ for $1 \leq i \leq j \leq m$, where $E_{i,j}$ is the event that $1^i 0^{j-i} 2^{m-j}$ lies in $A_m$. As $A$ is line-free, we observe that $E_{i,i}$, $E_{i,j}$, $E_{j,j}$ cannot simultaneously occur for any $1 \leq i \lt j \leq m$: the three points in question form a combinatorial line whose wildcard set is the block of positions $i+1,\dots,j$. Also, each of the events $E_{i,j}$ has probability about $\delta$.
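A small Python sketch (mine, not from the source; `point` and `is_line` are hypothetical helper names) verifying that for each $1 \leq i \lt j \leq m$ the three points behind $E_{i,i}$, $E_{i,j}$, $E_{j,j}$ form a combinatorial line:

```python
def point(i, j, m):
    """The string 1^i 0^(j-i) 2^(m-j) as a tuple over the alphabet {0,1,2}."""
    return (1,) * i + (0,) * (j - i) + (2,) * (m - j)

def is_line(p, q, r):
    """Check that p, q, r are the 0-, 1-, 2-points of a combinatorial line:
    they agree outside a nonempty wildcard set W, and on W they take the
    constant values 0, 1, 2 respectively."""
    m = len(p)
    wild = [k for k in range(m) if not (p[k] == q[k] == r[k])]
    if not wild:
        return False
    return all(p[k] == 0 and q[k] == 1 and r[k] == 2 for k in wild)

m = 5
# For every 1 <= i < j <= m the wildcard set is positions i..j-1 (0-indexed):
# setting it to 0 gives the point of E_{i,j}, to 1 that of E_{j,j},
# and to 2 that of E_{i,i}.
for i in range(1, m):
    for j in range(i + 1, m + 1):
        assert is_line(point(i, j, m), point(j, j, m), point(i, i, m))
```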

On the other hand, by the first moment method, with positive probability many of the $E_{i,i}$ hold simultaneously. An application of Cauchy-Schwarz then tells us that there exist $1 \leq i \lt i' \lt j \lt j' \leq m$ such that $E_{i,j} \wedge E_{i',j} \wedge E_{i,j'} \wedge E_{i',j'}$ has probability significantly larger than $\delta^4$.
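The Cauchy-Schwarz step is, presumably, the standard fourth-moment (4-cycle) computation; here is a sketch in that spirit, averaging over quadruples and ignoring the degenerate cases $i=i'$ or $j=j'$ (my reconstruction, not from the source):

```latex
% A sketch, assuming the usual 4-cycle bound; 1_{i,j} denotes the indicator
% of E_{i,j}, \mathbf{E} averages over the random embedding, and \mathbb{E}
% averages over the displayed indices.
\mathbb{E}_{i,i',j,j'}\,\mathbf{E}\bigl[1_{i,j}\,1_{i',j}\,1_{i,j'}\,1_{i',j'}\bigr]
  = \mathbf{E}\,\mathbb{E}_{i,i'}\Bigl(\mathbb{E}_{j}\,1_{i,j}\,1_{i',j}\Bigr)^{2}
  \geq \Bigl(\mathbf{E}\,\mathbb{E}_{j}\bigl(\mathbb{E}_{i}\,1_{i,j}\bigr)^{2}\Bigr)^{2}
  \geq \bigl(\mathbf{E}\,\mathbb{E}_{i,j}\,1_{i,j}\bigr)^{4} \approx \delta^{4}.
```

By pigeonhole some quadruple attains at least this average; the claim that some quadruple is *significantly* larger than $\delta^4$ presumably uses the line-free constraint on top of this.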

One can view the events $E_{i,j}$ as an $(i+m-j)$-uniform hypergraph, by fixing a base point $x$ and viewing the random subspace $[3]^m$ as formed by modifying $x$ on $m$ random indices. The above correlation would mean some significant irregularity in this hypergraph; the hope is that this implies some sort of usable structure on $A$, for instance one that can be used to locate a density increment.
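A toy illustration of the hypergraph view (hypothetical names, and one plausible reading of the construction: a coordinate set to 0 keeps the base point's value, so $E_{i,j}$ modifies $x$ on exactly $i+(m-j)$ of the chosen indices):

```python
def hyperedge(i, j, m, indices):
    """Indices where 1^i 0^(j-i) 2^(m-j) overwrites the base point x:
    the i positions set to 1 and the (m-j) positions set to 2."""
    ones = indices[:i]   # positions receiving the value 1
    twos = indices[j:]   # positions receiving the value 2
    return set(ones) | set(twos)

indices = [2, 5, 11, 13, 17]        # m = 5 randomly chosen coordinates of [3]^n
e = hyperedge(2, 4, 5, indices)     # the hyperedge attached to E_{2,4}
assert e == {2, 5, 17} and len(e) == 2 + (5 - 4)
```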

## Idea #2: IP Roth first

McCutcheon.508 (revised 2-17): I will give my general idea for a proof. I'm pretty sure it's sound, though it may not be feasible in practice. On the other hand, I may be badly mistaken about something. I will throw it out there for someone else to attempt, or say why it's nonsense, or perhaps ignore. I won't formulate it as a strategy to prove DHJ, but rather as a strategy to prove what I've called IP Roth. If successful, one could possibly adapt it to the DHJ, $k=3$ situation, but there would be complications that would obscure what was going on.

We work in $X=[n]^{[n]}\times [n]^{[n]}.$ For a real valued function $f$ defined on $X$, define $||f||_1=(\mathrm{IP-lim}_a\mathrm{IP-lim}_b {1\over |X|}\sum_{(x,y)\in X} f((x,y))f((x+a,y))f((x+b,y-b))f((x+a+b,y-b)))^{1/4},$

$||f||_2=(\mathrm{IP-lim}_a\mathrm{IP-lim}_b {1\over |X|}\sum_{(x,y)\in X} f((x,y))f((x,y+a))f((x+b,y-b))f((x+b,y+a-b)))^{1/4}.$

Now, let me explain what this means. $a$ and $b$ are subsets of $[n]$, and we identify $a$ with the characteristic function of $a$, which is a member of $[n]^{[n]}$. (That is how we can add $a$ to $x$ in the formulas above, etc.) Since $[n]$ is a finite set, you can't really take limits, but if $n$ is large we can do something almost as good, namely ensure that whenever $\max a\lt\min b$, the expression we are taking the limit of is close to something (Milliken-Taylor ensures this, I think). Of course, you have to restrict $a$ and $b$ to a subspace. What is a subspace? You take a sequence $a_i$ of subsets of $[n]$ with $\max a_i\lt\min a_{i+1}$ and then restrict to unions of the $a_i$.
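A hedged Python sketch of these conventions: blocks $a_1, a_2, \dots$ with $\max a_i \lt \min a_{i+1}$, shifts restricted to unions of blocks, and $x+a$ read coordinatewise mod $n$ (the mod-$n$ addition is my assumption about what addition in $[n]^{[n]}$ means; all names are illustrative):

```python
n = 12
blocks = [{0, 1}, {3, 4}, {7}, {9, 11}]   # max of each block < min of the next

def char_vec(s, n):
    """Characteristic function of s, viewed as a member of [n]^[n]."""
    return [1 if k in s else 0 for k in range(n)]

def union_of(indices):
    """An element of the subspace: the union of the chosen blocks a_i."""
    return set().union(*(blocks[i] for i in indices)) if indices else set()

def add(x, a, n):
    """x + a coordinatewise (mod n), identifying a with char_vec(a, n)."""
    av = char_vec(a, n)
    return [(xi + ai) % n for xi, ai in zip(x, av)]

x = list(range(n))            # an arbitrary point of [n]^[n]
a = union_of([0, 2])          # a = a_1 ∪ a_3 = {0, 1, 7}
b = union_of([3])             # b = a_4 = {9, 11}, so max(a) < min(b)
y = add(add(x, a, n), b, n)   # e.g. the first coordinate of (x+a+b, y-b)
assert max(a) < min(b)
```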

Now here is the idea. Take a subset $E$ of $X$ and let $f$ be its balanced indicator function. You first want to show that if either of the above-defined 2-norms of $f$ is small, then $E$ contains about the right number of corners $\{ (x,y), (x+a,y), (x,y+a)\}$, restricted to the subspace of course. What does that mean? Well, you treat each of the $a_i$ as a single coordinate, moving them together. The other coordinates I'm not sure about; maybe you can just fix them in the right way and have the norm that was small summing over all of $X$ still come out small. At any rate, the real trick is to show that if both coordinate 2-norms are big, you get a density increment on a subspace. Here a subspace surely means that you find some $a_i$'s, treat them as single coordinates, and fix the values on the other coordinates. (If the analogy with Shkredov's proof of the Ajtai-Szemerédi corners theorem holds, you probably only need one of these norms to be big.)
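As a toy model of the corner count being targeted, here is a brute-force counter in the abelian setting $\mathbb{Z}_N\times\mathbb{Z}_N$ rather than the IP setting of $X$ (purely illustrative; `corners` is a hypothetical helper, not part of the proposed argument):

```python
N = 7

def corners(E, N):
    """Count corners {(x,y), (x+a,y), (x,y+a)} with a != 0 inside E (mod N)."""
    count = 0
    for (x, y) in E:
        for a in range(1, N):
            if ((x + a) % N, y) in E and (x, (y + a) % N) in E:
                count += 1
    return count

full = {(x, y) for x in range(N) for y in range(N)}
# The full grid contains all N^2 * (N-1) corners, the benchmark against
# which "about the right number of corners" for a density-delta set is read.
assert corners(full, N) == N * N * (N - 1)
```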