
Expander graphs IV: Expansion and the eigenvalue gap

by Michael Nielsen on June 1, 2005

I return to expanders today, with a discussion of expansion and eigenvalue gaps. Thanks to all the commenters on earlier posts for all the proof ideas and simplifications!

Note: This post is one in a series introducing one of the deepest ideas in modern computer science, expander graphs. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web.

In today’s post we really reap the benefits of recasting graph theory in linear algebraic terms, showing that the expansion parameter can be related to the gap between the largest and second-largest eigenvalues of the graph. This ought to surprise you, perhaps even shock you. It’s a highly non-trivial result, with the big payoff of relating something we didn’t really understand all that well (the expansion parameter) to eigenvalue gaps, about which an enormous amount is known.

Expansion and the eigenvalue gap

Let’s return our attention to expander graphs, and see what the eigenvalues of a graph have to do with its expansion parameter. We define the gap for the graph G to be the difference \Delta(G) \equiv \lambda_1(G)-\lambda_2(G) between the largest and second-largest eigenvalues. The expansion parameter and the gap are connected by the following theorem:

Theorem: The expansion parameter h(G) for a d-regular graph G is related to the gap \Delta(G) by:

   \frac{\Delta(G)}{2} \leq h(G) \leq \sqrt{2d \Delta(G)}.

Thus, properties of the eigenvalue gap can be used to deduce properties of the expansion parameter. For example, if the eigenvalue gap for a family of d-regular graphs is bounded below by a positive constant, then the expansion parameter must also be bounded below by a positive constant, and so the family is an expander.

One reason for finding the connection between the gap and the expansion parameter interesting is that it is far easier to estimate the gap of an n by n matrix than it is to enumerate the exponentially many subsets S of the vertex set V, and compute |\partial S|/|S| for each one.
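To make the contrast concrete, here is a small numerical sketch (the code and names are my own, not from the lecture notes) computing both quantities for the 6-cycle, which is 2-regular: the gap needs a single eigendecomposition, while h(G) needs a loop over every vertex subset of size at most n/2.

```python
# Comparing the eigenvalue gap (cheap) with the expansion parameter
# (brute force over exponentially many subsets) for the cycle C_6.
import itertools
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                     # adjacency matrix of the cycle C_n
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Eigenvalue gap: one eigendecomposition, O(n^3).
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
gap = eigs[0] - eigs[1]

def boundary(S):
    """Number of edges leaving the vertex set S."""
    S = list(S)
    T = [v for v in range(n) if v not in S]
    return A[np.ix_(S, T)].sum()

# Expansion parameter: minimize |E(S, S-complement)| / |S| over all
# subsets with |S| <= n/2 -- exponentially many in general.
h = min(boundary(S) / len(S)
        for k in range(1, n // 2 + 1)
        for S in itertools.combinations(range(n), k))

print(gap, h)  # the theorem guarantees gap/2 <= h <= sqrt(2*d*gap)
```

For C_6 the gap is 1 and h = 2/3 (achieved by three consecutive vertices), so both inequalities of the theorem check out: 1/2 \leq 2/3 \leq \sqrt{2 \cdot 2 \cdot 1} = 2.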

Proof discussion: We already know that \lambda_1(G) = d for a d-regular graph G, with eigenvector \vec 1 = (1,1,\ldots,1). So we’ll concentrate on trying to understand the behaviour of the second-largest eigenvalue, \lambda_2(G). The theorem tells us that the difference between d and \lambda_2(G) is controlled both above and below by the expansion parameter h(G).

How can we get control over the second largest eigenvalue of G? One way is to observe that \lambda_2(G) is just the maximum of the expression v^T A v / v^T v, where A is the adjacency matrix of G, and we maximize over all vectors v orthogonal to the eigenvector \vec 1. An encouraging fact is that this expression is quite easy to deal with, because the condition that v be orthogonal to \vec 1 is actually equivalent to the sum of v’s entries being equal to 0, so we have

   \lambda_2(G) = \max_{v: {\rm tr}(v) = 0} \frac{v^T A v}{v^T v},

where {\rm tr}(v) is just the sum of the entries of the vector v.
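This variational characterization is easy to check numerically: projecting A onto the subspace of trace-zero vectors and taking the top eigenvalue of the compressed matrix should recover \lambda_2. A sketch of my own, using the 3-dimensional hypercube, which is 3-regular on 8 vertices:

```python
# Checking lambda_2 = max over trace-zero v of (v^T A v)/(v^T v)
# for the 3-cube, whose spectrum is {3, 1, 1, 1, -1, -1, -1, -3}.
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if bin(i ^ j).count("1") == 1:  # 3-cube: adjacent iff labels differ in one bit
            A[i, j] = 1

# Projector onto the trace-zero subspace, i.e. the orthogonal
# complement of the all-ones vector.
P = np.eye(n) - np.ones((n, n)) / n
lam2_variational = np.linalg.eigvalsh(P @ A @ P)[-1]  # top eigenvalue on that subspace
lam2_direct = np.sort(np.linalg.eigvalsh(A))[::-1][1]

print(lam2_variational, lam2_direct)  # both equal 1.0 for the 3-cube
```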

We’re going to provide a lower bound on \lambda_2(G) by simply guessing a good choice of v satisfying \mbox{tr}(v) = 0, and using the fact that

   \lambda_2(G) \geq \frac{v^T A v}{v^T v}.

To make a good guess, it helps to have a way of thinking about expressions like v^T A v, where \mbox{tr}(v) = 0. A convenient way of thinking is to rewrite v as the difference of two disjoint probability distributions, p and q, i.e., v = p-q, where p and q are non-negative vectors each summing to 1, and with disjoint support. This results in terms like p^T A q, which we can think of in terms of transition probabilities between q and p. This will allow us to apply the expansion properties of the graph.

Let’s make these ideas a little more concrete. The key is to define \vec 1_S to be the vector whose entries are 1 on S, and 0 elsewhere, and to observe that

   \vec 1_S^T A \vec 1_T = |E(S,T)|,

where |E(S,T)| is the number of edges between the vertex sets S and T. This suggests that we should choose p and q in terms of vectors like \vec 1_S, since it will enable us to relate expressions like v^T A v to the sizes of various edge sets, which, in turn, can be related to the expansion parameter.
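Here is a quick numerical check of this identity (my own example) on the 4-cycle, taking S = \{0,1\} and T = \overline S = \{2,3\}. One convention to note: when S = T, the expression \vec 1_S^T A \vec 1_S counts each internal edge twice (once as A_{ij} and once as A_{ji}), which is exactly the convention under which the regularity identity E(S,S) + E(S,\overline S) = d|S| used below holds.

```python
# Verifying 1_S^T A 1_T = |E(S, T)| on the 4-cycle 0-1-2-3-0.
import numpy as np

n = 4
A = np.zeros((n, n))
for i in range(n):                    # adjacency matrix of the cycle C_4
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

def indicator(S):
    """The vector that is 1 on S and 0 elsewhere."""
    v = np.zeros(n)
    v[list(S)] = 1
    return v

s, sc = indicator([0, 1]), indicator([2, 3])
print(s @ A @ sc)   # 2.0: the edges 1-2 and 3-0 cross from {0,1} to {2,3}
# With S = T the internal edge 0-1 is counted twice, so
# 1_S^T A 1_S + 1_S^T A 1_Sc = d|S|, the regularity identity.
print(s @ A @ s + s @ A @ sc)   # 4.0 = d * |S| = 2 * 2
```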

Suppose in particular that we choose

   v = \frac{\vec 1_S}{|S|}-\frac{\vec 1_{\overline S}}{|\overline S|}.

This satisfies the condition \mbox{tr}(v) = 0, and gives

   v^T v = \frac{1}{|S|}+\frac{1}{|\overline S|}


   v^T A v = \frac{1}{|S|^2} E(S,S) + \frac{1}{|\overline S|^2} E(\overline S,\overline S) - \frac{2}{|S||\overline S|} E(S,\overline S).

The definition of an expander graph gives us control over E(S,\overline S), so it is convenient to rewrite E(S,S) and E(\overline S,\overline S) in terms of E(S,\overline S), using the d-regularity of the graph:

   E(S,S)+E(S,\overline S) = d |S|; \,\,\, E(\overline S,\overline S)+ E(S,\overline S) = d |\overline S|.

A straightforward substitution and a little algebra gives:

   v^T A v = d \left( \frac{1}{|S|}+\frac{1}{|\overline S|} \right) - \left(\frac{1}{|S|}+\frac{1}{|\overline S|}\right)^2 E(S,\overline S).

Comparing with the earlier expression for the denominator v^T v, we obtain

   \lambda_2(G) \geq d-\left(\frac{1}{|S|}+\frac{1}{|\overline S|}\right)   E(S,\overline S).
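As a sanity check on this chain of algebra (a sketch of my own, not from the notes), we can pick a concrete graph and subset, build the v defined above, and confirm that its Rayleigh quotient equals d - (1/|S| + 1/|\overline S|) E(S,\overline S), with \lambda_2 at least this bound. For the 3-cube with S a face, both sides come out to exactly 1, and in fact the bound is attained.

```python
# Checking the derivation on the 3-cube Q_3 (d = 3) with S a face.
import numpy as np

n, d = 8, 3
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if bin(i ^ j).count("1") == 1:  # Q_3: adjacent iff labels differ in one bit
            A[i, j] = 1

S = [0, 1, 2, 3]                        # a face of the cube, |S| = 4
Sc = [4, 5, 6, 7]                       # its complement
oneS, oneSc = np.zeros(n), np.zeros(n)
oneS[S], oneSc[Sc] = 1, 1

v = oneS / len(S) - oneSc / len(Sc)     # the trace-zero test vector from the text
E_cross = oneS @ A @ oneSc              # |E(S, S-complement)| = 4 edges cross the cut

rayleigh = (v @ A @ v) / (v @ v)
bound = d - (1 / len(S) + 1 / len(Sc)) * E_cross
lam2 = np.sort(np.linalg.eigvalsh(A))[::-1][1]

print(rayleigh, bound, lam2)  # all three equal 1.0 for this choice of S
```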

Now choose S to achieve the expansion parameter, so that E(S,\overline S) = h(G) |S| and |S| \leq n/2. Then \left(\frac{1}{|S|}+\frac{1}{|\overline S|}\right) E(S,\overline S) = h(G)\left(1 + \frac{|S|}{|\overline S|}\right) \leq 2 h(G), since |S| \leq |\overline S|, and so

   \lambda_2(G) \geq d-2 h(G),

and thus

   \frac{\Delta(G)}{2} \leq h(G),

which was the first of the two desired inequalities in the theorem.

The proof of the second inequality is a little more complicated. Unfortunately, I haven’t managed to boil the proof down to a form that I’m really happy with, and for this reason I won’t describe the details. If you’re interested, you should try to prove it yourself, or refer to the notes of Linial and Wigderson.


Problem: Can we generalize this result so that it applies to a general undirected graph G, not just to d-regular graphs? Can we prove an analogous statement for directed graphs, perhaps in terms of singular values? Can we define a generalized notion of “expansion” which can be applied to any symmetric matrix A with non-negative entries, and connect that notion of expansion to the eigenvalue gap? Can we generalize even further? What happens if we change the field over which the matrix is considered?


One Comment
  1. If the graph is not regular, then we can still estimate a lower bound on the expansion factor using the following formula.
    For a graph G and a set of vertices S \subset G:

    \frac{|\delta S|}{|S|} \geq \frac{\lambda (2 – \lambda) \frac{|G \setminus S|}{|G|}}{1 – \lambda(2 – \lambda) \frac{|G \setminus S|}{|G|}}

    \geq \lambda(2-\lambda)\frac{|G \setminus S|}{|G|},

    where \lambda = 2 \lambda_1 /(\lambda_{n-1} + \lambda_1 ).

    So the expansion factor is at least \lambda(2-\lambda),

    where \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1} are the eigenvalues of the Laplacian.
