Expander graphs V: random walks on expanders

by Michael Nielsen on June 6, 2005

Back to expanders again. Today’s post looks at a simple application of expanders, showing that a random walk on an expander graph is likely to quickly escape from any sufficiently small subset of vertices. Intuitively, of course, this result is not surprising, but the exact quantitative form of the result turns out to be extremely useful in the next post, which is about decreasing the number of random bits used by a randomized algorithm.

Note: This post is one in a series introducing one of the deepest ideas in modern computer science, expander graphs. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web at

Random walks on expanders

Many applications of expanders involve doing a random walk on the expander. We start at some chosen vertex, and then repeatedly move to any one of the d neighbours, each time choosing a neighbour uniformly at random, and independently of prior choices.

To describe this random walk, suppose at some given time we have a probability distribution p describing the probability of being at any given vertex in the graph G. We then apply one step of the random walk procedure described above, i.e., selecting a neighbour of the current vertex uniformly at random. The updated probability distribution is easily verified to be:

   \tilde p = \frac{A(G)}{d} p.

That is, the Markov transition matrix describing this random walk is just \hat A(G) \equiv A(G)/d, i.e., up to a constant of proportionality the transition matrix is just the adjacency matrix. This relationship between the adjacency matrix and random walks opens up a whole new world of connections between graphs and Markov chains.
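In code, one step of the walk is just a matrix-vector multiply by \hat A(G) = A(G)/d. Here is a minimal sketch (NumPy assumed; the complete graph K4, which is 3-regular, is my own illustrative choice, not from the post):

```python
import numpy as np

# Adjacency matrix of the complete graph K4, a 3-regular graph on 4 vertices.
A = np.ones((4, 4)) - np.eye(4)
d = 3

# Transition matrix \hat A = A/d: each column sums to 1, so M @ p is
# again a probability distribution.
M = A / d

# Start at vertex 0 with certainty and take one step of the walk.
p = np.array([1.0, 0.0, 0.0, 0.0])
p_next = M @ p

print(p_next)  # each of the 3 neighbours of vertex 0 now has probability 1/3
```

Iterating `p = M @ p` simulates the distribution of the walk after any number of steps.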

One of the most important connections is between the eigenvalues of Markov transition matrices and the rate at which the Markov chain converges to its stationary distribution. In particular, the following beautiful theorem tells us that when the uniform distribution is a stationary distribution for the chain, then the Markov chain converges to the uniform distribution exponentially quickly, at a rate determined by the second largest eigenvalue of the transition matrix M.

Exercise: Show that if M is a normal transition matrix for a Markov chain then 1 = \lambda_1(M) \geq \lambda_2(M) \geq \ldots.

Theorem: Suppose M is a normal transition matrix for a Markov chain on n states, with the uniform distribution u = \vec 1/n as a stationary point, M u = u. Then for any starting distribution p,

   \| M^t p - u \|_1 \leq \sqrt{n} \lambda_2(M)^t,

where \| \cdot \|_1 denotes the l_1 norm.

The normality condition in this theorem may appear a little surprising. The reason it’s there is to ensure that M can be diagonalized. The theorem can be made to work for general M, with the second largest eigenvalue replaced by the second largest singular value. However, in our situation M is symmetric, and thus automatically normal, and we prefer the statement in terms of eigenvalues, since it allows us to make a connection to the expansion parameter of a graph. In particular, when M = \hat A(G) we obtain:

   \| \hat A(G)^t p-u\|_1 \leq \sqrt{n} \left(\frac{\lambda_2(G)}{d}\right)^t.

Combining this with our earlier results connecting the gap to the expansion parameter, we deduce that

   \| \hat A(G)^t p-u\|_1 \leq \sqrt{n} \left(1-\frac{h(G)^2}{2d^2}\right)^t.

Thus, for a family of expander graphs, the rate of convergence of the Markov chain is exponentially fast in the number of time steps t.
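The bound is easy to check numerically. Below is a small sanity check in Python (NumPy assumed; the graph K4 is my own illustrative choice), iterating the walk and comparing \| M^t p - u \|_1 against \sqrt{n} \lambda_2^t at each step. One caveat: K4's second eigenvalue is -1/3, so the code uses the second-largest eigenvalue in absolute value, which is the quantity the l_2 argument in the proof actually bounds.

```python
import numpy as np

# Illustrative example: the complete graph K4 is 3-regular, with adjacency
# eigenvalues 3, -1, -1, -1, so the walk matrix A/d has eigenvalues 1 and -1/3.
n, d = 4, 3
A = np.ones((n, n)) - np.eye(n)
M = A / d

u = np.full(n, 1.0 / n)        # uniform (stationary) distribution
p = np.zeros(n)
p[0] = 1.0                     # start the walk at vertex 0

# Second-largest eigenvalue of M in absolute value (here 1/3).
lam2 = np.sort(np.abs(np.linalg.eigvalsh(M)))[-2]

for t in range(1, 11):
    p = M @ p
    # The l_1 distance to uniform decays at least as fast as sqrt(n) * lam2^t.
    assert np.linalg.norm(p - u, 1) <= np.sqrt(n) * lam2 ** t + 1e-12
```

After ten steps the distance to uniform is already of order (1/3)^10, illustrating the exponential convergence.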

Exercise: Suppose M is a transition matrix for a Markov chain. Show that the uniform distribution u is a stationary point for the chain, i.e., Mu = u, if and only if M is doubly stochastic, i.e., has non-negative entries, and all rows and columns of the matrix sum to 1.
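A quick numerical illustration of this equivalence (the particular matrix below is a hypothetical example of my own, not from the post):

```python
import numpy as np

# A doubly stochastic matrix: non-negative entries, and every row and
# every column sums to 1.
M = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.4, 0.4],
    [0.3, 0.3, 0.4],
])

n = M.shape[0]
u = np.full(n, 1.0 / n)

assert np.allclose(M.sum(axis=0), 1.0)  # columns sum to 1
assert np.allclose(M.sum(axis=1), 1.0)  # rows sum to 1

# Since each row sums to 1, (M u)_i = (1/n) * sum_j M_ij = 1/n, so M u = u.
assert np.allclose(M @ u, u)
```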

Proof: We start by working with the l_2 norm \| \cdot \|_2. Since Mu = u we have M^t u = u, and so:

   \|M^t p - u \|_2 = \|M^t(p-u) \|_2.

A computation shows that p-u is orthogonal to u. But u is an eigenvector of M with the maximum eigenvalue, 1, and thus p-u must lie in the span of the eigenspaces with eigenvalues \lambda_2(M),\lambda_3(M),\ldots. It follows that

   \|M^t(p-u)\|_2 \leq \lambda_2(M)^t \|p-u\|_2 \leq \lambda_2(M)^t,

where we used the fact that \| p-u\|_2 \leq 1, easily established by observing that \|p-u\|_2 is convex in p, and thus must be maximized at an extreme point in the space of probability distributions; the symmetry of u ensures that without loss of generality we may take p = (1,0,\ldots,0). To convert this into a result about the l_1 norm, we use the fact that in n dimensions \|v\|_1 \leq \sqrt{n} \|v\|_2, and thus we obtain

   \|M^t p - u\|_1 \leq \sqrt{n} \lambda_2(M)^t,

which was the desired result. QED

What other properties do random walks on expanders have? We now prove another beautiful theorem which tells us that they “move around quickly”, in the sense that they are exponentially unlikely to stay for long within a given subset of vertices, B, unless B is very large.

More precisely, suppose B is a subset of vertices, and we choose some vertex X_0 uniformly at random from the graph. Suppose we use X_0 as the starting point for a random walk, X_0,\ldots,X_t, where X_t is the vertex after the tth step. Let B(t) be the event that X_j \in B for all j in the range 0,\ldots,t. Then we will prove that:

   \mbox{Pr}(B(t)) \leq \left( \frac{\lambda_2(G)}{d} + \frac{|B|}{n} \right)^t.

Provided \lambda_2(G)/d + |B|/n < 1, we get the desired exponential decrease in probability. For a family of expander graphs it follows that there is some constant \epsilon > 0 such that we get an exponential decrease for any B such that |B|/n < \epsilon. These results are special cases of the following more general theorem about Markov chains.

Theorem: Let X_0 be uniformly distributed on n states, and let X_0,\ldots,X_t be a time-homogeneous Markov chain with transition matrix M. Suppose the uniform distribution u is a stationary point of M, i.e., Mu = u. Let B be a subset of the states, and let B(t) be the event that X_j \in B for all j \in 0,\ldots,t. Then

   \mbox{Pr}(B(t)) \leq \left( \lambda_2(M) + \frac{|B|}{n} \right)^t.

Proof: The first step in the proof is to observe that

   \mbox{Pr}(B(t)) = \|(PMP)^t P u \|_1,

where the operator P projects onto the vector space spanned by those basis vectors corresponding to elements of B. This equation is not entirely obvious, and proving it is a good exercise for the reader. The next step is to prove that \| PMP \| \leq \lambda_2(M)+|B|/n, where the norm here is the operator norm. We will do this below, but note first that once this is done, the result follows, for we have

   \mbox{Pr}(B(t)) = \| (PMP)^t P u \|_1 \leq \sqrt{n} \| (PMP)^t P u \|_2

by the standard inequality relating l_1 and l_2 norms, and thus

   \mbox{Pr}(B(t)) \leq \sqrt{n} \| PMP \|^t \| P u \|_2,

by definition of the operator norm, and finally

   \mbox{Pr}(B(t)) \leq \left( \lambda_2(M)+\frac{|B|}{n} \right)^t,

where we used the assumed inequality for the operator norm, and the observation that \| P u \|_2 = \sqrt{|B|}/n \leq 1/\sqrt{n}.

To prove the desired operator norm inequality, \| PMP \| \leq \lambda_2(M)+|B|/n, suppose v is a normalized state such that \| PMP \| = |v^T PMP v|. Decompose Pv = \alpha u + \beta u_\perp, where u_\perp is a normalized state orthogonal to u. Since \|P v \|_2 \leq \|v \|_2 = 1 we must have |\beta| \leq 1. Furthermore, multiplying Pv = \alpha u + \beta u_\perp on the left by nu^T shows that \alpha = n u^T P v. It follows that |\alpha| is maximized by choosing v to be uniformly distributed over B, from which it follows that |\alpha| \leq \sqrt{|B|}. A little algebra shows that

   v^T PMP v = \alpha^2 u^T M u + \beta^2 u_\perp^T M u_\perp.

Applying |\alpha| \leq \sqrt{|B|}, u^T M u = u^T u = 1/n, |\beta| \leq 1, and u_\perp^T M u_\perp \leq \lambda_2(M) gives

   v^T P M P v \leq \frac{|B|}{n} + \lambda_2(M),

which completes the proof. QED

The results in today's post are elegant, but qualitatively unsurprising. (Of course, having elegant quantitative statements of results that are qualitatively clear is worthwhile in its own right!) In the next post we'll use these ideas to develop a genuinely surprising application of expanders, to reducing the number of random bits required by a probabilistic algorithm in order to achieve a desired success probability.
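Before closing, a quick Monte Carlo sanity check of the escape bound (a sketch, assuming NumPy; the graph K8 and the set B are my own illustrative choices). In the complete graph K_n a uniformly random neighbour of x is just a uniformly random vertex other than x, which makes the walk trivial to simulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complete graph K8: 7-regular, adjacency eigenvalues 7 and -1, so the
# walk matrix A/d has second eigenvalue of magnitude 1/7.
n, d = 8, 7
B = {0, 1}                 # the small set the walk should escape from
lam2 = 1.0 / d
t, trials = 5, 100_000

stayed = 0
for _ in range(trials):
    x = int(rng.integers(n))          # X_0 uniform on the vertices
    ok = x in B
    for _ in range(t):
        if not ok:
            break
        # Step to a uniformly random neighbour: any vertex except x in K_n.
        x = (x + int(rng.integers(1, n))) % n
        ok = x in B
    stayed += ok

p_hat = stayed / trials               # estimate of Pr(B(t))
bound = (lam2 + len(B) / n) ** t      # the theorem's bound, about 0.0094 here
print(p_hat, bound)
assert p_hat <= bound
```

The estimate falls far below the bound here; the theorem's strength is that it holds uniformly over all graphs with the given eigenvalue gap, not that it is tight for any particular one.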
