# Expander graphs VI: reducing randomness

Back from Boston! This is the final installment in my series about expanders. I’ll post a pdf containing the whole text in the next day or two. Thanks to everyone who’s contributed in the comments!

Today’s post explains how expander graphs can be used to reduce the number of random bits needed by a randomized algorithm in order to achieve a desired success probability. This post is the culmination of the series: we make use of the fact, proved in the last post, that random walks on an expander are exponentially unlikely to remain localized in any sufficiently large subset of vertices, a fact that relies in turn on the connection, developed in earlier posts, between the eigenvalue gap and the expansion parameter.

**Note:** This post is one in a series introducing one of the deepest ideas in modern computer science, *expander graphs*. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web at http://www.math.ias.edu/~boaz/ExpanderCourse/.

### Reducing the number of random bits required by an algorithm

One surprising application of expanders is that they can be used to reduce the number of random bits needed by a randomized algorithm in order to achieve a desired success probability.

Suppose, for example, that we are trying to compute a function $f(x)$ that can take the values $0$ or $1$. Suppose we have a randomized algorithm $A$ which takes as input $x$ and an $m$-bit uniformly distributed random variable $Y$, and outputs either $0$ or $1$. We assume that:

- $f(x) = 1$ implies $A(x,Y) = 1$ with certainty.
- $f(x) = 0$ implies $A(x,Y) = 0$ with probability at least $1-p$.

That is, $p$ is the maximal probability that the algorithm fails, in the case when $f(x) = 0$ but $1$ is output by the algorithm.

An algorithm of this type is called a *one-sided* randomized algorithm, since it can only fail when $f(x) = 0$, not when $f(x) = 1$. I won’t give any concrete examples of one-sided randomized algorithms here, but the reader unfamiliar with them should rest assured that they are useful and important – see, e.g., the book of Motwani and Raghavan (Cambridge University Press, 1995) for examples.

As an aside, the discussion of one-sided algorithms in this post can be extended to the case of randomized algorithms which can fail when either $f(x) = 0$ or $f(x) = 1$. The details are a little more complicated, but the basic ideas are the same. This is described in Linial and Wigderson’s lecture notes. Alternately, extending the discussion to this case is a good problem.

How can we decrease the probability of failure for a one-sided randomized algorithm? One obvious way of decreasing the failure probability is to run the algorithm $k$ times, computing $A(x,Y_1), \ldots, A(x,Y_k)$. If we get $A(x,Y_j) = 1$ for all $j$ then we output $1$, while if $A(x,Y_j) = 0$ for at least one value of $j$, then we output $0$. This algorithm makes use of $km$ random bits, and reduces the failure probability to at most $p^k$.
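The repetition scheme just described can be sketched in a few lines of Python. This is a toy illustration, not anything from the original post: `toy_algorithm`, the bit-length `M`, and all the parameters are invented for the example.

```python
import random

def amplify(algorithm, x, k, rng=random):
    """Run a one-sided algorithm k times with fresh m-bit strings.

    `algorithm(x, y)` is assumed to have one-sided error: if f(x) = 1
    it always returns 1; if f(x) = 0 it returns 1 with probability at
    most p.  Output 0 as soon as any run returns 0, otherwise output 1.
    """
    M = 8  # m, the number of random bits per run (illustrative value)
    for _ in range(k):
        y = rng.getrandbits(M)   # fresh uniform m-bit sample
        if algorithm(x, y) == 0:
            return 0             # a single 0 certifies f(x) = 0
    return 1

# Toy one-sided algorithm for an x with f(x) = 0: it wrongly outputs 1
# whenever the random string y falls in a "bad" set of density 1/2.
def toy_algorithm(x, y):
    return 1 if y % 2 == 0 else 0   # fails (outputs 1) with p = 1/2

random.seed(0)
trials = 10_000
failures = sum(amplify(toy_algorithm, x=0, k=5) for _ in range(trials))
print(failures / trials)  # close to (1/2)^5 = 0.03125
```

The observed failure rate should be near $p^k = 2^{-5}$, at a cost of $km = 40$ random bits per call.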

Expanders can be used to substantially decrease the number of random bits required to achieve such a reduction in the failure probability. We define a new algorithm $A'$, which requires a $d$-regular expander graph $G$ whose vertex set contains $2^m$ vertices, each of which can represent a possible $m$-bit input to $A$. The modified algorithm works as follows:

- Input $x$.
- Sample uniformly at random from $\{0,1\}^m$ to generate $Y_1$.
- Now do a $k-1$ step random walk on the expander, generating random variables $Y_2, \ldots, Y_k$.
- Compute $A(x,Y_1), \ldots, A(x,Y_k)$. If any of these are $0$, output $0$, otherwise output $1$.
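The steps above can be sketched as follows. This is a toy illustration: the circulant graph used here merely stands in for a genuine expander (in practice one would use an explicit expander family), and all names and parameters are invented for the example.

```python
import random

def expander_walk_samples(N, neighbors, k, rng=random):
    """Generate k correlated samples Y_1, ..., Y_k via a random walk.

    `neighbors(v)` returns the d neighbours of vertex v in a d-regular
    graph on vertices {0, ..., N-1}.  Only m + (k-1)*log2(d) random
    bits are consumed: m for the start vertex, log2(d) per walk step.
    """
    v = rng.randrange(N)              # Y_1: uniform start, m bits
    samples = [v]
    for _ in range(k - 1):
        v = rng.choice(neighbors(v))  # one walk step, log2(d) bits
        samples.append(v)
    return samples

def run_walk_version(algorithm, x, N, neighbors, k, rng=random):
    """The modified one-sided algorithm: output 0 if any run says 0."""
    for y in expander_walk_samples(N, neighbors, k, rng):
        if algorithm(x, y) == 0:
            return 0
    return 1

# Stand-in 4-regular graph on N = 2^m vertices (a circulant graph --
# NOT a good expander, but enough to show the mechanics of the walk).
M = 10
N = 2 ** M
def neighbors(v):
    return [(v + 1) % N, (v - 1) % N, (v + 37) % N, (v - 37) % N]
```

Swapping in a real expander family changes only the `neighbors` function; the sampling and amplification logic is unchanged.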

We see that the basic idea of the algorithm is similar to the earlier proposal for running $A$ repeatedly, but the sequence of independent and uniformly distributed samples is replaced by a random walk on the expander. The advantage of doing this is that only $m + (k-1)\log_2(d)$ random bits are required – $m$ to sample from the initial uniform distribution, and then $\log_2(d)$ for each step in the random walk. When $d$ is a small constant this is far fewer than the $km$ bits used when we simply repeatedly run the algorithm with uniform and independently generated random bits $Y_1, \ldots, Y_k$.
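The savings can be checked with a line of arithmetic; the parameter values below are purely illustrative.

```python
from math import log2

m, k, d = 100, 50, 8                    # illustrative parameters
naive_bits = k * m                      # k independent m-bit samples
walk_bits = m + (k - 1) * int(log2(d))  # expander-walk version
print(naive_bits, walk_bits)            # prints: 5000 247
```

With a constant-degree expander, each extra repetition costs only $\log_2(d)$ bits instead of $m$.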

With what probability does this algorithm fail? Define $B$ to be the set of values of $y$ such that $f(x) = 0$, yet $A(x,y) = 1$. This is the “bad” set, which we hope our algorithm will avoid. The algorithm will fail only if all $k$ of the vertices $Y_1, \ldots, Y_k$ visited by the random walk fall within $B$. From our earlier theorem we see that this occurs with probability at most:

$$\left( \frac{|B|}{2^m} + \frac{\lambda_2(G)}{d} \right)^{k-1}.$$

But we know that $|B|/2^m \leq p$, and so the failure probability is at most

$$\left( p + \frac{\lambda_2(G)}{d} \right)^{k-1}.$$
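To compare the two failure bounds derived in this section – $p^k$ for independent repetition versus $(p + \lambda_2(G)/d)^{k-1}$ for the expander walk – here is a quick numeric check with made-up values of $p$, $\lambda_2(G)/d$ and $k$.

```python
p = 0.5    # one-shot failure probability (illustrative)
lam = 0.1  # lambda_2(G)/d for a good expander (illustrative)
k = 50     # number of runs

naive_bound = p ** k               # independent repetitions, k*m bits
walk_bound = (p + lam) ** (k - 1)  # expander walk, m + (k-1)*log2(d) bits
print(naive_bound, walk_bound)     # both are astronomically small
```

The walk bound decays more slowly per run, but each extra run costs only $\log_2(d)$ fresh random bits rather than $m$.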

Thus, provided $p + \lambda_2(G)/d < 1$, we again get an exponential decrease in the failure probability as the number of repetitions $k$ is increased.

### Conclusion

These notes have given a pretty basic introduction to expanders, and there's much we haven't covered. More detail and more applications can be found in the online notes of Linial and Wigderson, or in the research literature. Still, I hope that these notes have given some idea of why these families of graphs are useful, and of some of the powerful connections between graph theory, linear algebra, and random walks.
