Roth's Theorem concerning discrepancy on Arithmetic Progression
Letting <math>\scriptstyle \mathcal{AP}(N) </math> be the collection of all (not necessarily homogeneous) arithmetic progressions in <math>\{1, 2, ..., N\}</math>, Roth (1964) proved that <math> \scriptstyle \delta(\mathcal{AP}(N),{\mathbf x}) \geq c N^{1/4}</math>, independent of the sequence <math>\scriptstyle {\mathbf x}</math>.
*Roth, K. F., Remark concerning integer sequences. Acta Arith. 9 (1964), 257--260. MR0168545 (29 #5806)
In 2000, László Lovász gave another proof using semidefinite optimization.
*Lovász, László, Integer sequences and semidefinite programming. Dedicated to Professor Kálmán Győry on the occasion of his 60th birthday. Publ. Math. Debrecen 56 (2000), no. 3-4, 475--479. MR1765994 (2001k:11020)
Matoušek and Spencer (1996) proved that the exponent <math> 1/4 </math> is best possible, although their proof is not constructive.
*Matoušek, Jiří; Spencer, Joel, [http://dx.doi.org/10.1090/S0894-0347-96-00175-0 Discrepancy in arithmetic progressions]. J. Amer. Math. Soc. 9 (1996), no. 1, 195--204. MR1311824 (96c:11089)
== "Representation of the identity" proof of <math>\scriptstyle \gg n^{1/8}</math> ==
This is an elaboration of the outline in [http://gowers.wordpress.com/2010/03/13/edp12-representing-diagonal-maps/#comment-6764 this post].
We break the proof down into 9 easy pieces.
* First, we consider APs modulo <math>n</math> for a large <math>n</math>. At the end, we'll have proven the existence of an AP with diameter smaller than <math>n</math>, so that if it wraps around it does so only once; splitting it at the wrap-around point leaves two genuine APs, one of which carries at least half of the discrepancy. Set <math>\omega=e^{2\pi i /n}</math>. Let <math> \scriptstyle {\mathbf x}</math> be a <math> \scriptstyle \pm1</math> column vector (our coloring!), and let <math>\delta</math> be the maximum discrepancy of <math> \scriptstyle {\mathbf x}</math> along any AP. Summations are over all of <math>{\mathbb Z}_n</math> unless noted otherwise, or obviously otherwise.
* Let <math>P_i</math> be the indicator vector of the <math>i</math>-th AP. Suppose that we have some <math>\lambda_{i,j}</math> with
:<math>I = \sum_{i,j} \lambda_{i,j} P_i P_j^T,</math>
where <math>I</math> is the <math> \scriptstyle n\times n</math> identity matrix. Then
:<math>
n={\mathbf x}^T {\mathbf x} = {\mathbf x}^T I {\mathbf x} ={\mathbf x}^T\left(\sum \lambda_{i,j} P_i P_j^T\right) {\mathbf x}
=\sum_{i,j} \lambda_{i,j} ({\mathbf x}^T P_i) (P_j^T {\mathbf x}) \leq \delta^2 \sum |\lambda_{i,j}|.
</math>
So, if <math>\sum |\lambda_{i,j}| = o(n)</math>, necessarily <math>\delta\to\infty</math>. This simplifies the problem considerably: can the identity matrix be written in the form <math>\sum \lambda_{i,j} P_i P_j^T</math> with small <math>\lambda_{i,j}</math>? We no longer need concern ourselves with the vector <math> \scriptstyle {\mathbf x}</math>.
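To fix ideas, here is a minimal numerical sketch (in Python with numpy; the modulus <math>n</math>, the AP half-length <math>m</math>, and the random coloring are illustrative choices, not part of the argument) of the objects involved: the coloring <math> \scriptstyle {\mathbf x}</math>, the indicator vectors of APs modulo <math>n</math>, and the maximum discrepancy over the APs <math> \scriptstyle P_{d,a}</math> that the proof actually uses (which is at most <math>\delta</math>).
<pre>
import numpy as np

n = 101
m = int(np.sqrt(n)) // 5          # AP half-length, as chosen later in the proof
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=n)   # a +-1 coloring of Z_n (the vector x)

def P(d, a):
    """Indicator vector of the AP {a - m*d, ..., a, ..., a + m*d} modulo n."""
    v = np.zeros(n)
    v[(a + d * np.arange(-m, m + 1)) % n] = 1
    return v

# largest |x^T P_{d,a}| over the APs used below (differences d up to sqrt(n))
disc = max(abs(x @ P(d, a)) for d in range(1, int(np.sqrt(n)) + 1) for a in range(n))
print(disc)  # the proof forces this maximum (hence delta) to grow like n^{1/8} for every coloring x
</pre>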
* Set <math>P_{d,a}</math> to be the indicator vector of the AP with length <math>2m+1</math> (with <math>m</math> to be determined later), difference <math>d\geq 1</math>, and center <math>a</math>. Set <math>\omega_r(t)=\omega^{rt}</math>, a column vector whose <math>t</math>-th component is <math>\omega^{rt}</math>. Note that
:<math>I = \frac1n \sum_r \omega_r \overline{\omega_r}^T.</math>
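This is just the orthogonality relation <math> \scriptstyle \sum_r \omega^{r(s-t)} = n</math> if <math> \scriptstyle s=t</math> and <math> \scriptstyle 0</math> otherwise. A quick numerical sanity check (a Python/numpy sketch, with <math>n</math> an arbitrary small value):
<pre>
import numpy as np

n = 12
omega = np.exp(2j * np.pi / n)
W = omega ** np.outer(np.arange(n), np.arange(n))   # row r of W is the vector omega_r
I_rebuilt = sum(np.outer(W[r], W[r].conj()) for r in range(n)) / n
assert np.allclose(I_rebuilt, np.eye(n))            # I = (1/n) sum_r omega_r conj(omega_r)^T
</pre>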
We now proceed to write <math>\omega_r</math> as a sum of <math>P_{d,a}</math>.
* We express a sum in two ways:
:<math>\sum_s P_{d,0}(s) \omega^{r(t-s)} = \omega^{rt} \sum_s P_{d,0}(s)\omega^{-rs}=\omega^{rt} \hat{P}_{d,0}(-r)</math>
and
:<math>\sum_s P_{d,0}(s) \omega^{r(t-s)} = \sum_s P_{d,0}(t-s) \omega^{rs}=\sum_s \omega^{rs}P_{d,s}(t)</math>
Thus,
:<math>\omega_r = \frac{1}{\hat{P}_{d,0}(-r)} \sum_s \omega^{rs} P_{d,s},</math>
the promised sum-of-APs representation of <math>\omega_r</math>, and consequently the promised representation of <math>I</math> as a sum of <math>P_{d,s_1} P_{d,s_2}^T</math>. All that remains is to analyze the coefficients of the representation of <math>I</math>.
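Again a numerical sanity check of this representation (a Python/numpy sketch; the values of <math>n</math>, <math>m</math>, <math>d</math>, <math>r</math> below are arbitrary illustrative choices with <math> \scriptstyle \hat{P}_{d,0}(-r)\neq 0</math>):
<pre>
import numpy as np

n, m, d, r = 101, 2, 7, 5                       # illustrative values only
omega = np.exp(2j * np.pi / n)

def P(dd, a):                                   # indicator vector of P_{dd,a}
    v = np.zeros(n)
    v[(a + dd * np.arange(-m, m + 1)) % n] = 1
    return v

omega_r = omega ** (r * np.arange(n))
P_hat = P(d, 0) @ omega ** (-r * np.arange(n))  # \hat{P}_{d,0}(-r)
rebuilt = sum(omega ** (r * s) * P(d, s) for s in range(n)) / P_hat
assert np.allclose(rebuilt, omega_r)            # omega_r = (1/\hat{P}_{d,0}(-r)) sum_s omega^{rs} P_{d,s}
</pre>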
* As <math>\hat{P}_{d,0}(-r)=\sum_s P_{d,0}(s)\omega^{-rs}</math>, and <math>P_{d,0}</math> is symmetric about 0, we see that <math>\hat{P}_{d,0}(-r)=\hat{P}_{d,0}(r)</math> is real. Moreover, as it is a geometric series, it is routine to find
:<math>\alpha_r:=\hat{P}_{d,0}(-r)=\frac{\sin\left((2m+1)\pi rd/n\right)}{\sin\left(\pi rd/n\right)}.</math>
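For completeness, the geometric-series computation: writing <math> \scriptstyle z=\omega^{-rd}=e^{i\theta}</math> with <math> \scriptstyle \theta=-2\pi rd/n</math>, and assuming <math> \scriptstyle z\neq 1</math> (otherwise the sum is simply <math> \scriptstyle 2m+1</math>),
:<math>\hat{P}_{d,0}(-r)=\sum_{j=-m}^{m} z^{j}=\frac{z^{-m-\frac12}-z^{m+\frac12}}{z^{-\frac12}-z^{\frac12}}
=\frac{\sin\left((2m+1)\theta/2\right)}{\sin\left(\theta/2\right)}=\frac{\sin\left((2m+1)\pi rd/n\right)}{\sin\left(\pi rd/n\right)}.</math>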
We will want <math>\alpha_r</math> to be bounded away from 0. To that end, for each <math>r</math>, fix now and forever <math>d</math> in <math>[1,\sqrt{n}]</math> with <math>\|rd/n\|\leq n^{-1/2}</math>, where <math>\|\cdot\|</math> denotes the distance to the nearest integer. That such a <math>d</math> exists is a classical result of Diophantine approximation (Dirichlet's theorem), whose proof is just the pigeonhole principle. Setting <math> \scriptstyle m=\frac15 \sqrt n</math>, we find
:<math>\alpha_r = \frac{\sin\left((2m+1)\pi rd/n\right)}{\sin\left(\pi rd/n\right)} \geq \frac{2\cdot\left((2m+1)\|rd/n\|\right)}{\pi \|rd/n\|}>m,</math>
using that the ratio depends only on <math> \scriptstyle \|rd/n\|</math> (as <math> \scriptstyle 2m+1</math> is odd), that <math> \scriptstyle (2m+1)\|rd/n\|\leq \frac25+n^{-1/2}<\frac12</math>, and the elementary bounds <math> \scriptstyle \sin x\geq 2x/\pi</math> on <math> \scriptstyle [0,\pi/2]</math> and <math> \scriptstyle \sin x\leq x</math>. More obviously, <math> \scriptstyle \alpha_r\leq 2m+1</math>. In short <math> \scriptstyle \alpha_r</math> has order <math> \scriptstyle \sqrt{n}</math>, to within a constant factor.
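A small computational sketch of this step (Python/numpy; Dirichlet's theorem is brute-forced rather than pigeonholed, and <math>n</math> is an arbitrary illustrative value) that, for each <math>r</math>, picks such a <math>d</math> and confirms <math> \scriptstyle m<\alpha_r\leq 2m+1</math>:
<pre>
import numpy as np

n = 401
root_n = int(np.sqrt(n))
m = root_n // 5                               # m = (1/5) sqrt(n), rounded down here

def dist(x):                                  # ||x||: distance to the nearest integer
    return abs(x - round(x))

for r in range(n):
    # Dirichlet: some d in [1, sqrt(n)] has ||r d / n|| <= n^(-1/2)
    d = next(dd for dd in range(1, root_n + 1) if dist(r * dd / n) <= 1 / np.sqrt(n))
    ks = np.arange(-m, m + 1)
    alpha = np.exp(-2j * np.pi * r * d * ks / n).sum().real   # = \hat{P}_{d,0}(-r)
    assert m < alpha <= 2 * m + 1 + 1e-9      # alpha_r is of order sqrt(n)
</pre>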
*:<math> I = n^{-1} \sum_r \omega_r \overline{\omega_r}^T = n^{-1} \sum_r \alpha_r^{-2}\left(\sum_x \omega^{rx}P_{d,x}\right)\left(\sum_y \omega^{-ry}P_{d,y}^T\right) = \sum_{r,x,y} n^{-1} \alpha_r^{-2} \omega^{r(x-y)} P_{d,x}P_{d,y}^T </math>
It is tempting to try and be done the easy way, but <math>\sum_{r,x,y} \left|n^{-1} \alpha_r^{-2} \omega^{r(x-y)} \right|</math> is easily recognized as a sum with <math>n^3</math> summands, each of which has size <math>\approx n^{-2}</math>, so that the total is not <math>o(n)</math>. In short, we need to find some cancelation.
* Set <math>R(\Delta)=\{r \colon d=\Delta\}</math>, the set of <math>r</math> for which the difference <math>d</math> fixed above equals <math>\Delta</math>. By a pigeonhole argument, <math> \scriptstyle |R(\Delta)|\leq 2\sqrt{n}</math>. The coefficient of <math>P_{\Delta,x}P_{\Delta,y}^T</math> is <math>\sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)}.</math>
We need to bound
:<math> \sum_{\Delta=1}^{\sqrt{n}} \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|.</math>
* By the Cauchy–Schwarz inequality,
:<math>
\sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|
\leq \left(\sum_{x,y} 1^2 \right)^{1/2} \left( \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|^2\right)^{1/2}</math>
:<math>= n \,\cdot\,\left(\sum_{x,y} \sum_{r_1,r_2\in R(\Delta)} n^{-2} \alpha_{r_1}^{-2} \alpha_{r_2}^{-2} \omega^{(r_1-r_2)(x-y)}\right)^{1/2} = n\,\cdot\, \left( \sum_{r\in R(\Delta)} n^{-2}\alpha_r^{-4} n^2\right)^{1/2}</math>
where the last equality is by noting that <math>\sum_{x,y}\omega^{(r_1-r_2)(x-y)}</math> is either <math>n^2</math> (if <math> \scriptstyle r_1-r_2=0</math>) or 0 (if <math>r_1-r_2\not=0</math>). Since <math>\alpha_r</math> is between <math>m</math> and <math>2m+1</math> (recall <math> \scriptstyle m=\frac15 \sqrt n</math>), we have <math>\alpha_r^{-4} \leq m^{-4} = \frac{5^4}{n^2}</math>. Thus,
:<math> n\,\cdot\, \left( \sum_{r\in R(\Delta)} n^{-2}\alpha_r^{-4} n^2\right)^{1/2} \leq 25 |R(\Delta)|^{1/2}\leq 25\sqrt{2} n^{1/4},</math>
and so
:<math> \sum_{\Delta=1}^{\sqrt{n}} \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|\leq 25\sqrt{2} n^{3/4}.</math>
* Returning to step 2, we have <math>\delta^2 \geq \frac{1}{25\sqrt{2}} n^{1/4}</math>, whence <math>\delta > \frac 17 n^{1/8}</math>.
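Finally, a self-contained numerical sketch of the whole construction (Python/numpy; <math>n</math> is an arbitrary small illustrative choice, and the rounding of <math>m</math> is ad hoc). It rebuilds <math> \scriptstyle I=\sum\lambda_{i,j}P_iP_j^T</math> from the coefficients above and prints <math> \scriptstyle \sum|\lambda_{i,j}|</math> next to the bound <math> \scriptstyle 25\sqrt{2}\,n^{3/4}</math>; by the second piece this forces <math> \scriptstyle \delta^2\geq n/\sum|\lambda_{i,j}|</math> for every coloring.
<pre>
import numpy as np

n = 101
root_n = int(np.sqrt(n))
m = root_n // 5
omega = np.exp(2j * np.pi / n)

def P(d, a):                                       # indicator vector of P_{d,a}
    v = np.zeros(n)
    v[(a + d * np.arange(-m, m + 1)) % n] = 1
    return v

def dist(x):                                       # distance to the nearest integer
    return abs(x - round(x))

# For each r, fix d(r) in [1, sqrt(n)] with ||r d / n|| <= n^(-1/2)   (Dirichlet)
d_of = [next(d for d in range(1, root_n + 1) if dist(r * d / n) <= 1 / np.sqrt(n))
        for r in range(n)]
alpha = [np.exp(-2j * np.pi * r * d_of[r] * np.arange(-m, m + 1) / n).sum().real
         for r in range(n)]

I_rebuilt = np.zeros((n, n), dtype=complex)
total = 0.0                                        # accumulates sum_{i,j} |lambda_{i,j}|
for Delta in sorted(set(d_of)):
    R = [r for r in range(n) if d_of[r] == Delta]  # R(Delta) = {r : d(r) = Delta}
    # c(t) = sum_{r in R(Delta)} omega^{rt} / (n alpha_r^2);  lambda_{(Delta,x),(Delta,y)} = c(x-y)
    c = np.array([sum(omega ** (r * t) / (n * alpha[r] ** 2) for r in R) for t in range(n)])
    M = np.column_stack([P(Delta, a) for a in range(n)])          # column a is P_{Delta,a}
    C = c[(np.arange(n)[:, None] - np.arange(n)[None, :]) % n]    # C[x, y] = c(x - y)
    I_rebuilt += M @ C @ M.T                       # adds sum_{x,y} c(x-y) P_{Delta,x} P_{Delta,y}^T
    total += n * np.abs(c).sum()                   # sum_{x,y} |c(x-y)| = n * sum_t |c(t)|

assert np.allclose(I_rebuilt, np.eye(n))           # the representation of the identity holds
print(total, 25 * np.sqrt(2) * n ** 0.75)          # sum |lambda| versus the bound 25*sqrt(2)*n^(3/4)
</pre>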