Proposals for finding long low-discrepancy sequences

Introduction

One of the interesting subquestions of the Erdős discrepancy problem is whether the minimum discrepancy of a [math]\displaystyle{ \pm }[/math] sequence of length n must grow logarithmically with n or whether it can grow more slowly. There are two (and perhaps more) natural ways of approaching this question. The more obvious one is to try to think of a clever construction that works. However, although this approach gives us sequences whose discrepancy grows logarithmically, it seems rather unlikely that it can be pushed further, especially in the light of the fact that the sequences that have been generated by computer do not have a structure that we fully understand.

A more promising approach is to try to think of a (possibly randomized) algorithm for producing long sequences with low discrepancy and to try to prove facts about its performance. But even before doing that, one would ideally like to implement the algorithm and see whether it appears to perform well. Of course, there is a trade-off between performance on the one hand and efficiency and analysability on the other: the hope is to find an algorithm that works significantly faster than a brute-force (depth-first) algorithm, but still produces long sequences with low discrepancy. For instance, if one could find an algorithm that very quickly yielded a sequence of length 1,000 and discrepancy 3, and very quickly yielded another sequence of length 100,000 and discrepancy 4, then it would be a promising candidate for a proof that the growth could be sublogarithmic. (The precise numbers here are open to debate.)
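
For concreteness, here is a minimal sketch of the brute-force depth-first baseline, written in Python. Everything in it (the function name, the parameters, the cutoffs in the final line) is illustrative rather than taken from any existing implementation; the point is just to pin down what "brute force" means here, namely extending the sequence a term at a time and backtracking whenever some HAP sum leaves [-C, C].

<pre>
def depth_first_longest(max_len, C):
    """Illustrative sketch (not from the wiki): longest +/-1 sequence of
    length <= max_len all of whose HAP sums x_d + x_{2d} + ... stay in
    [-C, C], found by exhaustive backtracking."""
    x, best = [], []

    def extend():
        nonlocal best
        if len(x) > len(best):
            best = x[:]
        if len(best) == max_len or len(x) == max_len:
            return                      # done, or a full-length sequence found
        for sign in (1, -1):
            x.append(sign)
            m = len(x)
            # only HAP sums ending at the new term can have changed;
            # x[d-1::d] is x_d, x_{2d}, ..., x_m when d divides m
            if all(abs(sum(x[d - 1::d])) <= C
                   for d in range(1, m + 1) if m % d == 0):
                extend()
            x.pop()

    extend()
    return best

print(len(depth_first_longest(100, 2)))   # small instances only: the tree explodes
</pre>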

A first approach: sophisticated greed

A purely greedy algorithm would be to choose each term in the sequence in a way that minimizes the discrepancy of the sequence so far, given the previous choices that have been made. It would be interesting to know how such an algorithm performs, but a bit of hand calculation suggests that it would not do very well.
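
To check that impression experimentally, here is one plausible reading of the purely greedy rule, again as an illustrative Python sketch (the tie-breaking towards +1 and all of the names are my own choices, not part of the proposal): at each step, pick the sign that minimizes the largest |HAP sum| among the sums that the new term enters.

<pre>
def greedy_sequence(length):
    """Illustrative sketch of a purely greedy rule: choose x_n to minimise
    the worst |HAP sum| that x_n enters, breaking ties in favour of +1."""
    x = [0]          # padding so that x[i] is x_i
    sums = {}        # sums[d] = x_d + x_{2d} + ... + x_{kd}, k = floor(n/d)
    for n in range(1, length + 1):
        divisors = [d for d in range(1, n + 1) if n % d == 0]

        def worst(s):
            return max(abs(sums.get(d, 0) + s) for d in divisors)

        sign = 1 if worst(1) <= worst(-1) else -1
        x.append(sign)
        for d in divisors:
            sums[d] = sums.get(d, 0) + sign
    return x[1:]

def discrepancy(x):
    """Largest |x_d + x_{2d} + ... + x_{kd}| over all d and k."""
    best = 0
    for d in range(1, len(x) + 1):
        s = 0
        for i in range(d, len(x) + 1, d):
            s += x[i - 1]
            best = max(best, abs(s))
    return best

print(discrepancy(greedy_sequence(1000)))
</pre>

Plotting discrepancy(greedy_sequence(n)) against n would make the hand-calculation impression above easy to confirm or refute.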

A sophisticated greedy algorithm would be one that takes more into account when the choices are made. For example, suppose that you are choosing [math]\displaystyle{ x_n }[/math] and are trying to keep the discrepancy below 2. It may be that your choice is not forced. However, it may also be that the choices of [math]\displaystyle{ x_{n+1},x_{n+2},x_{n+3} }[/math] and [math]\displaystyle{ x_{n+4} }[/math] are forced and are all forced to be -1. In that case, just looking ahead a little bit tells you that you should choose [math]\displaystyle{ x_n }[/math] to be 1. So here is how one might define a sophisticated greedy algorithm.

Definition. A sophisticated greedy algorithm is one that builds up a sequence term by term. If [math]\displaystyle{ x_1,\dots,x_n }[/math] have been chosen, then for each m > n and each d|m one makes a note of the partial sum [math]\displaystyle{ x_d+x_{2d}+\cdots+x_{kd} }[/math], where [math]\displaystyle{ k=\lfloor n/d\rfloor }[/math] (a sum that depends only on d). One then looks at the effect that each choice of [math]\displaystyle{ x_{n+1} }[/math] would have on these partial sums and makes the choice so as to optimize that effect in some way.
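
To make the definition concrete, here is one possible instantiation as a Python sketch. The bookkeeping (a dictionary of partial sums, one per common difference, updated as each term is fixed) follows the definition; the scoring rule, which penalizes partial sums more and more heavily as they approach the barrier [math]\displaystyle{ \pm C }[/math], is purely an assumption of mine, one of many ways one might try to make "optimize" precise, a question taken up below.

<pre>
import math

def sophisticated_greedy(length, C):
    """Illustrative sketch: extend the sequence term by term, maintaining for
    every difference d the partial sum x_d + ... + x_{kd}, k = floor(n/d),
    and choosing each sign by an (assumed) exponential-penalty score."""
    x = [0]          # padding so that x[i] is x_i
    sums = {}        # sums[d] = current partial sum for difference d
    for n in range(1, length + 1):
        divisors = {d for k in range(1, math.isqrt(n) + 1) if n % k == 0
                    for d in (k, n // k)}
        legal = [s for s in (1, -1)
                 if max(abs(sums.get(d, 0) + s) for d in divisors) <= C]
        if not legal:
            break    # both signs breach the barrier: stuck at length n-1
        # exponential penalty so that sums already near +/-C dominate
        sign = min(legal, key=lambda s: sum(4 ** abs(sums.get(d, 0) + s)
                                            for d in divisors))
        x.append(sign)
        for d in divisors:
            sums[d] = sums.get(d, 0) + sign
    return x[1:]

print(len(sophisticated_greedy(10000, 2)))   # how far before it gets stuck?
</pre>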

It is not completely obvious what "optimize" means. However, if one is trying to keep the discrepancy below C, then it is crucial that at the point that [math]\displaystyle{ x_m }[/math] is chosen we are not trapped: that is, we must not have two HAP sums ending at m, one that would equal C if we chose [math]\displaystyle{ x_m=1 }[/math] and one that would equal -C if we chose [math]\displaystyle{ x_m=-1 }[/math], for then either choice produces a sum of modulus C. So our aim is to avoid that. When we choose [math]\displaystyle{ x_n }[/math] we will have a list of "dangerous" values of m, and an order of priority for dealing with them (based on how many chances we will have to do so, which depends on whether the HAPs in question have large or small common difference).
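
A sketch of how that bookkeeping might look, with every detail beyond the paragraph above (the horizon, the urgency measure, all names) being an assumption of mine: scan a window of future positions m, flag those whose incoming partial sums already sit at both extremes, and rank the flagged positions by how few HAP terms fall between here and m, since those terms are the only chances to relieve the pressure.

<pre>
def dangerous_points(sums, n, C, horizon):
    """Illustrative sketch: given sums[d] = partial sum for difference d after
    x_1, ..., x_n, flag future positions m whose incoming sums already sit at
    both C-1 and -(C-1), so that a bad choice of x_m would force a sum of
    modulus C.  Returned pairs (m, chances) are sorted most urgent first."""
    flagged = []
    for m in range(n + 1, n + horizon + 1):
        divisors = [d for d in range(1, m + 1) if m % d == 0]
        values = [sums.get(d, 0) for d in divisors]
        if max(values) >= C - 1 and min(values) <= -(C - 1):
            # chances to intervene: HAP terms of these differences that are
            # still to be chosen strictly before m (few when d is large)
            chances = sum((m - 1) // d - n // d for d in divisors)
            flagged.append((m, chances))
    return sorted(flagged, key=lambda t: t[1])
</pre>

A sophisticated greedy step would then consult this list first and, among the legal signs, favour the one that moves the most urgent sums away from their barriers.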