Ergodic-inspired methods
These methods are inspired by the [[Furstenberg-Katznelson argument]] and the [[ergodic perspective]].
== Idea #2: IP Roth first ==
McCutcheon.508 (revised 2-17): I will give my general idea for a proof. I’m pretty sure it’s sound, though it may not be feasible in practice; on the other hand, I may be badly mistaken about something. I will throw it out there for someone else to attempt, or say why it’s nonsense, or perhaps ignore. I won’t formulate it as a strategy to prove DHJ, but rather for what I’ve called IP Roth. If successful, one could possibly adapt it to the DHJ, k=3 situation, but there would be complications that would obscure what was going on.
We work in [math]\displaystyle{ X=[n]^{[n]}\times [n]^{[n]}. }[/math] For a real-valued function [math]\displaystyle{ f }[/math] defined on [math]\displaystyle{ X }[/math], define [math]\displaystyle{ ||f||_1=(\mathrm{IP-lim}_a\mathrm{IP-lim}_b {1\over |X|}\sum_{(x,y)\in X} f((x,y))f((x+a,y))f((x+b,y-b))f((x+a+b,y-b)))^{1/4}, }[/math]
[math]\displaystyle{ ||f||_2=(\mathrm{IP-lim}_a\mathrm{IP-lim}_b {1\over |X|}\sum_{(x,y)\in X} f((x,y))f((x,y+a))f((x+b,y-b))f((x+b,y+a-b)))^{1/4}. }[/math] Now, let me explain what this means. [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are subsets of [math]\displaystyle{ [n] }[/math], and we identify [math]\displaystyle{ a }[/math] with the characteristic function of [math]\displaystyle{ a }[/math], which is a member of [math]\displaystyle{ [n]^{[n]}. }[/math] (That is how we can add [math]\displaystyle{ a }[/math] to [math]\displaystyle{ x }[/math] inside, etc.) Since [math]\displaystyle{ [n] }[/math] is a finite set, you can’t really take limits, but if [math]\displaystyle{ n }[/math] is large, we can do something almost as good, namely ensure that whenever [math]\displaystyle{ \max a\lt \min b }[/math], the expression we are taking the limit of is close to something (Milliken–Taylor ensures this, I think). Of course, you have to restrict [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] to a subspace. What is a subspace? You take a sequence [math]\displaystyle{ a_i }[/math] of subsets of [math]\displaystyle{ [n] }[/math] with [math]\displaystyle{ \max a_i\lt \min a_{i+1} }[/math] and then restrict [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] to unions of the [math]\displaystyle{ a_i. }[/math]
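To make the quantity inside the IP-limits concrete, here is a small computational sketch (an illustration only, not part of the argument). For one fixed pair of blocks [math]\displaystyle{ a, b }[/math] with [math]\displaystyle{ \max a\lt \min b }[/math], it evaluates the average over [math]\displaystyle{ (x,y)\in X }[/math] that sits inside the definition of [math]\displaystyle{ ||f||_1 }[/math]; the IP-limits themselves are not computed. The helper names (<code>char_vec</code>, <code>inner_average_norm1</code>) and the reading of [math]\displaystyle{ x+a }[/math] as coordinatewise addition mod [math]\displaystyle{ n }[/math] are assumptions made for the sake of illustration.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def char_vec(a, n):
    """Characteristic 0/1 vector of a subset a of {0, ..., n-1}."""
    v = np.zeros(n, dtype=int)
    v[list(a)] = 1
    return v

def inner_average_norm1(f, a, b, n):
    """Average inside the IP-limits of ||f||_1 (before the 1/4 power),
    for one fixed pair of blocks a, b:

        (1/|X|) * sum_{(x,y) in X} f(x,y) f(x+a,y) f(x+b,y-b) f(x+a+b,y-b)

    x and y range over [n]^[n]; addition of the 0/1 vectors is taken
    coordinatewise mod n (an assumed reading of "x + a").
    Only practical for very small n, since |X| = n^(2n).
    """
    A, B = char_vec(a, n), char_vec(b, n)
    points = [np.array(p) for p in itertools.product(range(n), repeat=n)]
    total = 0.0
    for x in points:
        for y in points:
            total += (f(x, y)
                      * f((x + A) % n, y)
                      * f((x + B) % n, (y - B) % n)
                      * f((x + A + B) % n, (y - B) % n))
    return total / len(points) ** 2
</syntaxhighlight>

For instance, with [math]\displaystyle{ n=3 }[/math] one can take [math]\displaystyle{ f }[/math] to be the balanced indicator of a random subset of [math]\displaystyle{ X }[/math] and blocks [math]\displaystyle{ a=\{0\} }[/math], [math]\displaystyle{ b=\{2\} }[/math]; the analogous average for [math]\displaystyle{ ||f||_2 }[/math] is obtained by changing the four factors accordingly.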
Now here is the idea. Take a subset [math]\displaystyle{ E }[/math] of [math]\displaystyle{ X }[/math] and let [math]\displaystyle{ f }[/math] be its balanced indicator function. You first want to show that if either of the two norms defined above is small for [math]\displaystyle{ f }[/math], then [math]\displaystyle{ E }[/math] contains about the right number of corners [math]\displaystyle{ \{ (x,y), (x+a,y), (x,y+a)\} }[/math] (restricted to the subspace, of course). What does that mean? Well, you treat each of the [math]\displaystyle{ a_i }[/math] as a single coordinate, moving them together. I’m not sure about the other coordinates; maybe you can just fix them in the right way so that the norm that was small when summing over all of [math]\displaystyle{ X }[/math] still comes out small. At any rate, the real trick is to show that if both coordinate norms are big, you get a density increment on a subspace. Here a subspace surely means that you find some [math]\displaystyle{ a_i }[/math]s, treat them as single coordinates, and fix the values on the other coordinates. (If the analogy with Shkredov's proof of the Szemerédi corners theorem holds, you probably only need one of these norms to be big.)
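As a companion sketch (again only an illustration of the definitions, under the same assumptions as above), the following counts the corners [math]\displaystyle{ \{ (x,y), (x+a,y), (x,y+a)\} }[/math] visible inside one combinatorial subspace: each block [math]\displaystyle{ a_i }[/math] is moved as a single coordinate, the remaining coordinates are frozen at prescribed values, and [math]\displaystyle{ a }[/math] runs over characteristic vectors of nonempty unions of the blocks. Here [math]\displaystyle{ E }[/math] is assumed to be stored as a set of hashable pairs produced by the hypothetical helper <code>key</code>.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def key(x, y):
    """Hashable representation of a point (x, y) of X."""
    return (tuple(int(t) for t in x), tuple(int(t) for t in y))

def subspace_points(blocks, fixed, n):
    """Points of the combinatorial subspace: every position inside a block a_i
    carries one common value (the block moves as a single coordinate), while
    positions outside all blocks keep the values of `fixed` (a length-n array)."""
    positions = [sorted(b) for b in blocks]
    for vals in itertools.product(range(n), repeat=len(blocks)):
        x = np.array(fixed)
        for pos, v in zip(positions, vals):
            x[pos] = v
        yield x

def count_corners_in_subspace(E, blocks, fixed, n):
    """Count corners {(x,y), (x+a,y), (x,y+a)} contained in E, where x, y run
    over the subspace and a is the characteristic vector of a nonempty union
    of the blocks; addition is coordinatewise mod n, as before."""
    pts = list(subspace_points(blocks, fixed, n))
    count = 0
    for x in pts:
        for y in pts:
            for r in range(1, len(blocks) + 1):
                for combo in itertools.combinations(blocks, r):
                    a = np.zeros(n, dtype=int)
                    for blk in combo:
                        a[sorted(blk)] = 1
                    if (key(x, y) in E
                            and key((x + a) % n, y) in E
                            and key(x, (y + a) % n) in E):
                        count += 1
    return count
</syntaxhighlight>

In these terms, a density increment on a subspace would mean that for some choice of blocks and frozen coordinates, [math]\displaystyle{ E }[/math] occupies a noticeably larger fraction of the points produced by <code>subspace_points</code> than it does of all of [math]\displaystyle{ X }[/math].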