Proposals for finding long low-discrepancy sequences - Revision history (Gowers: Created page, 2010-01-13)
<div>===Introduction===<br />
<br />
One of the interesting subquestions of the Erd&#337;s discrepancy problem is whether the minimum discrepancy of a <math>\pm 1</math> sequence of length n must grow logarithmically with n or whether it can grow more slowly. There are two (and perhaps more) natural ways of approaching this question. The more obvious is to try to think of a clever construction that works. However, although this is the approach that gives us sequences with logarithmic growth of discrepancy, it seems rather unlikely that it can be pushed further, especially in the light of the fact that the sequences that have been generated by computer do not have a structure that we fully understand.<br />
<br />
A more promising approach is to try to think of a (possibly randomized) algorithm for producing long sequences with low discrepancy and to try to prove facts about its performance. But even before doing that, one would ideally like to implement the algorithm and see whether it appears to perform well. Of course, there is a trade-off between performance on the one hand and efficiency and analysability on the other: the hope is to find an algorithm that works significantly faster than a brute-force (depth-first) algorithm, but still produces long sequences with low discrepancy. For instance, an algorithm that very quickly yielded a sequence of length 1,000 with discrepancy 3, and another of length 100,000 with discrepancy 4, would be a promising candidate for a proof that the growth could be sublogarithmic. (The precise numbers here are open to debate.)<br />
<br />
===A first approach: sophisticated greed===<br />
<br />
A purely greedy algorithm would be to choose each term in the sequence in a way that minimizes the discrepancy of the sequence so far, given the previous choices that have been made. It would be interesting to know how such an algorithm performs, but a bit of hand calculation suggests that it would not do very well.<br />
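As a point of comparison, the purely greedy rule can be sketched as follows. This is my own minimal illustration, not code from the project: at each step it picks the sign that minimizes the worst HAP partial sum ending at the current position (ties go to +1).

```python
def greedy_sequence(n_terms):
    """Purely greedy sketch: choose each x_n in {+1,-1} to minimize the
    largest |HAP partial sum| among the sums that gain a term at n."""
    x = [0]          # 1-indexed for convenience; x[0] is unused
    sums = {}        # sums[d] = x_d + x_{2d} + ... along difference d
    for n in range(1, n_terms + 1):
        divisors = [d for d in range(1, n + 1) if n % d == 0]
        best = None
        for choice in (+1, -1):
            # worst |partial sum| over the HAPs that end at n
            worst = max(abs(sums.get(d, 0) + choice) for d in divisors)
            if best is None or worst < best[0]:
                best = (worst, choice)
        x.append(best[1])
        for d in divisors:
            sums[d] = sums.get(d, 0) + best[1]
    return x[1:]

seq = greedy_sequence(20)
```

Note that this only controls sums ending at the current position, which is exactly why a bit of hand calculation suggests it performs poorly: it cannot see forced choices coming.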
<br />
A sophisticated greedy algorithm would be one that takes more into account when the choices are made. For example, suppose that you are choosing <math>x_n</math> and are trying to keep the discrepancy below 2. It may be that your choice is not forced. However, it may also be that the choices of <math>x_{n+1},x_{n+2},x_{n+3}</math> and <math>x_{n+4}</math> <em>are</em> forced and are all forced to be -1. In that case, just looking ahead a little bit tells you that you should choose <math>x_n</math> to be 1. So here is how one might define a sophisticated greedy algorithm.<br />
<br />
<strong>Definition.</strong> <em>A sophisticated greedy algorithm is one that builds up a sequence term by term. If <math>x_1,...,x_n</math> have been chosen, then for all <math>m > n</math> and for all <math>d|m</math> one makes a note of the partial sum <math>x_d+x_{2d}+...+x_{kd}</math>, where <math>k=\lfloor n/d\rfloor</math>. One then looks at the effect that the choice of <math>x_{n+1}</math> would have on these partial sums and makes the choice in order to optimize that effect in some way.</em><br />
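The bookkeeping in this definition can be sketched as follows (again, my own illustration rather than any implementation from the project): record the partial sum along each difference d, then test which values of <math>x_{n+1}</math> keep every HAP sum ending at n+1 within the target bound C.

```python
def hap_sums(x):
    """x is the sequence x_1..x_n as a 0-indexed list; return the partial
    sums {d: x_d + x_{2d} + ... + x_{kd}} with k = floor(n/d)."""
    n = len(x)
    return {d: sum(x[j - 1] for j in range(d, n + 1, d))
            for d in range(1, n + 1)}

def score_next(x, C=2):
    """Return the candidate value(s) of x_{n+1} that keep every HAP sum
    ending at n+1 inside [-C, C]; both, one, or neither may survive."""
    n = len(x)
    sums = hap_sums(x)
    allowed = []
    for choice in (+1, -1):
        ok = all(abs(sums.get(d, 0) + choice) <= C
                 for d in range(1, n + 2) if (n + 1) % d == 0)
        if ok:
            allowed.append(choice)
    return allowed
```

When `score_next` returns a single value, the choice is forced in the sense used above; when it returns an empty list, the bound C has already been lost at this position.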
<br />
It is not completely obvious what "optimize" means. However, if one is trying to keep the discrepancy below C, then it is crucial that at the point that <math>x_m</math> is chosen, we do not have two HAP sums that end at m, one equal to C and one equal to -C. So our aim is to avoid that. When we choose <math>x_n</math> we will have a list of "dangerous" values of m, and an order of priority for dealing with them (based on how many chances we will have to do so, which depends on whether the HAPs in question have large or small common difference).</div>
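The danger test just described can be sketched as follows. This is a simplified illustration of my own (the `window` parameter and all names are mine): it flags upcoming positions m at which, on the current partial sums, one HAP sum ending at m already stands at C and another at -C, so that neither value of <math>x_m</math> would be safe. The current sums are only a proxy, since later choices may move them.

```python
def dangerous_positions(sums, n, C, window=50):
    """Scan m = n+1, ..., n+window and flag those m for which some divisor's
    partial sum sits at +C and another's at -C, given the current sums."""
    flagged = []
    for m in range(n + 1, n + window + 1):
        at_plus = [d for d in range(1, m + 1)
                   if m % d == 0 and sums.get(d, 0) == C]
        at_minus = [d for d in range(1, m + 1)
                    if m % d == 0 and sums.get(d, 0) == -C]
        if at_plus and at_minus:
            # whichever sign x_m takes, one of these sums leaves [-C, C]
            flagged.append((m, at_plus, at_minus))
    return flagged
```

A fuller version would also order the flagged positions by priority: an HAP with small common difference meets many more points before m, so there are more chances to steer its sum away from the bound, whereas a large common difference leaves few opportunities to correct course.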