Expander graphs VI: reducing randomness

by Michael Nielsen on June 19, 2005

Back from Boston! This is the final installment in my series about expanders. I’ll post a pdf containing the whole text in the next day or two. Thanks to everyone who’s contributed in the comments!

Today’s post explains how expander graphs can be used to reduce the number of random bits needed by a randomized algorithm in order to achieve a desired success probability. This post is the culmination of the series: we make use of the fact, proved in the last post, that random walks on an expander are exponentially unlikely to remain localized in any sufficiently large subset of vertices, a fact that relies in turn on the connection, developed in earlier posts, between the eigenvalue gap and the expansion parameter.

Note: This post is one in a series introducing one of the deepest ideas in modern computer science, expander graphs. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web at http://www.math.ias.edu/~boaz/ExpanderCourse/.

Reducing the number of random bits required by an algorithm

One surprising application of expanders is that they can be used to reduce the number of random bits needed by a randomized algorithm in order to achieve a desired success probability.

Suppose, for example, that we are trying to compute a function f(x) that can take the values f(x) = 0 or f(x) = 1. Suppose we have a randomized algorithm A(x,Y) which takes as input x and an m-bit uniformly distributed random variable Y, and outputs either 0 or 1. We assume that:

  • f(x) = 0 implies A(x,Y) = 0 with certainty.
  • f(x) = 1 implies A(x,Y) = 1 with probability at least 1-p_f.

That is, p_f is the maximum probability that the algorithm fails: the probability, when f(x) = 1, that the algorithm nonetheless outputs A(x,Y) = 0.

An algorithm of this type is called a one-sided randomized algorithm, since it can only fail when f(x) = 1, not when f(x) = 0. I won’t give any concrete examples of one-sided randomized algorithms here, but the reader unfamiliar with them should rest assured that they are useful and important – see, e.g., the book of Motwani and Raghavan (Cambridge University Press, 1995) for examples.

As an aside, the discussion of one-sided algorithms in this post can be extended to the case of randomized algorithms which can fail when either f(x) = 0 or f(x) = 1. The details are a little more complicated, but the basic ideas are the same. This is described in Linial and Wigderson’s lecture notes. Alternately, extending the discussion to this case is a good problem.

How can we decrease the probability of failure for a one-sided randomized algorithm? One obvious way of decreasing the failure probability is to run the algorithm k times, computing A(x,Y_0),A(x,Y_1),\ldots,A(x,Y_{k-1}). If we get A(x,Y_j) = 0 for all j then we output 0, while if A(x,Y_j) = 1 for at least one value of j, then we output 1. This algorithm makes use of km bits, and reduces the failure probability to at most p_f^k.
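The naive repetition strategy just described can be sketched in a few lines of Python. Here A is any one-sided algorithm taking x and an integer y in [0, 2^m) standing in for an m-bit random string; the names and interface are assumptions for illustration, not part of the original discussion.

```python
import random

def amplify_naive(A, x, m, k, rng=random):
    """Run the one-sided algorithm A with k independent m-bit seeds.

    Output 1 if any run returns 1, otherwise 0.  This consumes k*m
    random bits, and (for a one-sided A with failure probability p_f)
    reduces the failure probability to at most p_f**k.
    """
    for _ in range(k):
        y = rng.randrange(2 ** m)  # a fresh, uniform m-bit sample
        if A(x, y) == 1:
            return 1
    return 0
```

Since the algorithm is one-sided, a single 1 among the k runs is conclusive evidence that f(x) = 1; only the all-zero outcome can be mistaken.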

Expanders can be used to substantially decrease the number of random bits required to achieve such a reduction in the failure probability. We define a new algorithm \tilde A as follows. It requires a d-regular expander graph G whose vertex set V contains 2^m vertices, each of which can represent a possible m-bit input y to A(x,y). The modified algorithm \tilde A works as follows:

  • Input x.
  • Sample uniformly at random from V to generate Y_0.
  • Now do a k-1 step random walk on the expander, generating random variables Y_1,\ldots, Y_{k-1}.
  • Compute A(x,Y_0),\ldots,A(x,Y_{k-1}). If any of these are 1, output 1, otherwise output 0.

We see that the basic idea of the algorithm is similar to the earlier proposal for running A(x,Y) repeatedly, but the sequence of independent and uniformly distributed samples Y_0,\ldots,Y_{k-1} is replaced by a random walk on the expander. The advantage of doing this is that only m+k \log(d) random bits are required – m to sample from the initial uniform distribution, and then \log(d) for each step in the random walk. When d is a small constant this is far fewer than the km bits used when we simply repeatedly run the algorithm A(x,Y_j) with uniform and independently generated random bits Y_j.
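The walk-based algorithm \tilde A can be sketched as follows. The graph is supplied as a neighbors(v) function returning the d neighbours of vertex v; in a real application this would be an explicit d-regular expander on 2^m vertices, but the interface here is a hypothetical stand-in for illustration.

```python
import random

def amplify_walk(A, x, neighbors, n, k, rng=random):
    """One-sided amplification via a random walk on a d-regular graph.

    neighbors(v) returns the d neighbours of vertex v in a d-regular
    graph on n = 2**m vertices (ideally an expander); each vertex
    encodes one m-bit seed for A.  This uses roughly
    m + (k-1)*log2(d) random bits instead of the k*m bits needed
    for independent repetitions.
    """
    v = rng.randrange(n)  # m random bits to pick the start vertex Y_0
    for step in range(k):
        if A(x, v) == 1:  # any single 1 is conclusive: output 1
            return 1
        if step < k - 1:
            # log2(d) random bits to pick the next step of the walk
            v = rng.choice(neighbors(v))
    return 0
```

Note that only the choice of the start vertex costs m bits; each subsequent step costs just enough bits to choose among the d edges at the current vertex.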

With what probability does this algorithm fail? Define B_x to be the set of values of y such that A(x,y) = 0, yet f(x) = 1. This is the “bad” set, which we hope our algorithm will avoid. The algorithm will fail only if the steps in the random walk Y_0,Y_1,\ldots,Y_{k-1} all fall within B_x. From our earlier theorem we see that this occurs with probability at most:

   \left( \frac{|B_x|}{2^m} + \frac{\lambda_2(G)}{d} \right)^{k-1}.

But we know that |B_x|/2^m \leq p_f, and so the failure probability is at most

   \left( p_f + \frac{\lambda_2(G)}{d} \right)^{k-1}.
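It is instructive to put numbers into these bounds. The figures below are assumptions chosen purely for illustration: m = 100 random bits per run, failure probability p_f = 0.5, and a d-regular expander with d = 64, for which a Ramanujan-type graph would give \lambda_2(G)/d \leq 2\sqrt{d-1}/d \approx 0.25.

```python
import math

# Illustrative parameters (assumed, not from the text above)
m, d, k, p_f = 100, 64, 20, 0.5

# For a Ramanujan-type d-regular expander, lambda_2/d <= 2*sqrt(d-1)/d
lam_ratio = 2 * math.sqrt(d - 1) / d  # about 0.25 for d = 64

bits_naive = k * m                                   # independent runs
bits_walk = m + (k - 1) * math.ceil(math.log2(d))    # expander walk

bound_naive = p_f ** k                  # failure bound, naive version
bound_walk = (p_f + lam_ratio) ** (k - 1)  # failure bound, walk version

print(bits_naive, bits_walk)   # 2000 vs 214 random bits
print(bound_naive, bound_walk)
```

The walk version gives a weaker (but still exponentially small) failure bound, while using roughly a tenth of the random bits; since p_f + \lambda_2(G)/d is about 0.75 here, the bound still decays exponentially in k.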

Thus, provided p_f+\lambda_2(G)/d < 1, we again get an exponential decrease in the failure probability as the number of repetitions k is increased.

Conclusion

These notes have given a pretty basic introduction to expanders, and there's much we haven't covered. More detail and more applications can be found in the online notes of Linial and Wigderson, or in the research literature. Still, I hope that these notes have given some idea of why these families of graphs are useful, and of some of the powerful connections between graph theory, linear algebra, and random walks.
